Comments

  • On Rationality
    You have the problem that this is reducing rational behaviour to its lowest atomistic common denominator. So you are in fact creating a social system with this as the bias being emphasised.

    Rationality is about a calculus of interests. But in a natural system, like a society, it would be the view seen across some proper spatiotemporal scale. So what can a single person see? Or what even can a society see if it is just thinking of the immediate now and not the next several generations?

    Rationality can only be applied to the scope it is given. And if history doesn't exist on any single spatial or temporal scale, but instead unfolds in a complex adaptive manner across many scales, then that is the kind of "actor" that has to be at the "centre" weighing the possibilities.

    So sure, game theory captures the dynamics of rational interactions. You can figure out the balance between competing and cooperating as the best way to get to some goal.

    But your OP is already swallowing right-wing market theory as the efficient information method. That definitely works - but it has to be tied back to the left-wing counterpart: a socialised, institutional view of what matters in the long run.

    Of course, new wave economics is trying to make that shift. There are all sorts of moves, like national happiness indexes and triple bottom-line accounting, that attempt to lift the horizons when it comes to applying market judgements. Neo-liberalism was about stripping the institutional wisdoms out to maximise the accelerative freedoms. Now the collective fate of humanity and the planet need to be back in the picture if we are to call ourselves truly rational creatures.

    So utilitarianism is half the story. Ensuring that it actually is applied over a broad enough sweep of our history to come is the current challenge.

    Economics is not so dumb that it can't understand that. But the problem is that global society has built up such extremes of inequality that self-interest has become polarised. What would be rational for those at the top is rather too divorced from what would work best for those at the bottom.

    The system could stumble along a few more years. But to celebrate it as a state of enlightened self-interest - an epitome of rationality - is a little premature.
  • Interpretive epistemology
    Do you argue that it's the world we live in? Or the created world of reality?tim wood

    My argument is triadic. So it incorporates all three things of the self, its world, and then the world.

    There is no "we" apart from as an emergent aspect of the world we create. Psychologically, sensory experience comes into focus as a felt distinction between what is self and what is other. That attribution is how we arise - within a world we create.

    Then this is going on within the actual world to which it is an embodied response. So there is the world out there, and then the us in here. Except the us in here is part of the sensory model. Stones and hands are examples of the model's meaningful distinctions.

    So of course there really is a real world. But psychologically, what we need to experience is a world with us in it. So the real world is not yellow or blue. But our psychological models make sense when "we" feel we exist within a world of coherent objects. And having colour is a great way to manufacture that object coherence. We can see a banana because of the way it constantly pops out of a cluttered visual landscape.

    The distinction being that if it's reality, then knowledge - interpretations that work in reality - are never quite about the world. That would leave a troublesome gap.tim wood

    But that is the point. The gap is not troublesome but functional. It is the symbolism that frees us from the world as it actually is and thus allows "us" to actually exist.

    Why should a banana look yellow? The yellowness of yellow is such an arbitrary fact when you think about it. Like sweet being sweet, it is the arbitrary nature of qualia that convinces people of the hard problem of consciousness.

    But my approach is saying that the arbitrariness of the symbol is the point. That is why symbolisation can work. Whether we shake hands or kiss cheeks, a friendly greeting is a friendly greeting. What matters is that there is a symbolic gesture to anchor the psychological reality.

    What matters in sensory discrimination is that the brain have a dramatic reaction to what counts in terms of making an ecological difference. So if I want to see fruit in a tree, then I want to see yellow and red as a violent contrast to green, even if - physically speaking, in terms of wavelength - they are only fractionally different in energy frequency. To see red as the diametric opposite of green - which is the way colour channel opponency works - is completely untrue of the world as it actually is, and yet hugely psychologically convenient.
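
    As a rough back-of-envelope illustration of that "fractionally different" claim (my own numbers, using typical wavelengths of around 530 nm for green and 630 nm for red, not figures from the post):

        f = c/\lambda \;\Rightarrow\; f_{green} \approx \frac{3 \times 10^8}{530 \times 10^{-9}} \approx 5.7 \times 10^{14}\ \text{Hz}, \qquad f_{red} \approx 4.8 \times 10^{14}\ \text{Hz}

    So the two poles of the opponent channel differ by less than 20 percent in frequency, yet get coded by the visual system as diametric opposites.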

    And that is the kind of freedom from veracity which is at the base of actual "knowledge". You can't symbolise an understanding if you are tied to simply trying to re-present what actually exists. You need to break the physical connection of the world to start forming a semiotic modelling relation with it.

    So the model is always pragmatically about the world - it always has to live in it. But it is always also a model. It is not based on direct truth or faithful recreation. Even at the level of basic perception, it has to be an efficient narration. It is the construction of some interpretable system of sign - of which "we" emerge as the consistent and persisting narrative core.

    I buy Heidegger, in that I think we're already in the world, and that would eliminate the gap.tim wood

    As best as I understand Heidegger, he goes with the fact that experience is for us an umwelt. So it is both a psychological construct and also a lived response to an actual world.

    That sets up the epistemic dilemma. And the solution is not to worry about the gap but to realise that the epistemic cut is how we could even exist. We only arise because interpretation is what it is all about. Once a system develops some habit of interpretance, then there is an "interpreter" in play. The self emerges as a product of the model - the other to the other that is the world.

    But Kant's question as to how I know it is a hammer, with the corollary that I can't know, is still there.tim wood

    Yes. Kant brought the psychological dilemma to the fore. But he left people feeling it was a problem - the bug and not the feature.

    Peirce's does indeed seem to be an account that works and makes sense, but the Kantian question seems still to endure.tim wood

    Again, the question is answered if the knowledge gap is not the bug but the feature. And modern psychology would say the semiotic view puts the Kantian concerns safely to bed.

    Pragmatism is about being able to accept irreducible uncertainty as part of the game - the game being to minimise uncertainty. If you still yearn for absolute knowledge about anything, you are stuck in olden times. :)

    This moves towards a radical (imo) destruction of "knowledge" as a term meaningful in itself, or at least away from any naive idea of knowledge I might have had.

    I cannot rid myself of is the notion of bias in the form of the presuppositions that necessarily are part of the building materials of "interpretance." Or in short, that such is just an obscuring accommodation that happens to work
    tim wood

    So where is your yearning for absolute certainty coming from? Why is relative sureness not enough? Why is the standard human ability to operate on partial and uncertain information not in fact a huge advantage?

    A computer, as a logical machine, is only as good as its certainties. Garbage in, garbage out. But organisms swim freely in shifting uncertain worlds and thrive. Mistakes are how they learn. Knowledge is always provisional.

    Nature has its epistemology. And it is pragmatic.
  • Interpretive epistemology
    The CCP sure would like that.Wayfarer

    The Chinese Communist Party? What are you smoking today?

    Again, you are just wanting to wheel out your standard attack on Scientism. And if a Pragmatist wanders into your sights, you are going to light him up because that's close enough for you.

    (What was I saying about how we construct umwelts to legitimate our habits of action?)
  • Interpretive epistemology
    It all goes back to Kant, IMO.Wayfarer

    Whatever floats your boat.

    But in any case, aside from pragmatism and concern with what works, there's the issue of knowledge of the good, the true, from a perspective other than the pragmatic - something to set the moral compass against.Wayfarer

    Is there? Maybe you just define pragmatism in terms of actual selfishness rather than a collective self-interest. I say that morality represents what works for our collectivised selfhood. And what that would be - in terms of our habits of action - is something we collectively would aspire to know.

    So you are pretending that pragmatism seeks to rule out what it in fact aims to explain.
  • Interpretive epistemology
    The essence of interpretation is creation. All that is created is created within the limits of the creating.tim wood

    What would be key to the Peircean semiotic view I'm expressing is that interpretations actually have to live in the world. So they are not free creations ... in the long run at least. To survive, they must prove themselves useful habits. They must stabilise a working relationship that is then defined as being about a self in a world.

    So the self-interested aspect of the epistemology does not have to collapse back towards idealism. Pragmatism presumes that the modelling relation only exists because it makes sense. The world is out there. And that is how it can be a possibility that a selfhood can develop which is taking its own purpose-laden point of view.

    I know that 2+2 = (is) 4, and that the stone on my desk just is a stone.

    The only way to reconcile this knowledge (that I take as certain) with its essential createdness is to suppose that as knowledge it comes into being - is created - when I think of it.
    tim wood

    Again, Peirce would stress that thinking in some fashion is not a fresh creation of every encounter with the world but instead the development of a reasonable habit. It is an action-oriented view of epistemology. And it is through our minimisation of accidents or mistakes that we move towards the best possible habits of interpretance.

    So yes, every encounter is the chance to make shit up in some random fashion. We need the power to hypothesise to get things started. But mindfulness is a state of established habit. We emerge as a self by building up the steadiness of a habitual point of view.

    So recognising stones as stones is the kind of habit that we develop. Now a "stone" is a concept laden with plenty of self-interest. We can do things with stones that we can't do with marshmallows, pebbles or cats. If I want to smash open an oyster, my mind will leap towards the idea of finding something that is enough like a stone. I won't look at a cat or marshmallow and feel I have the solution to the problem in hand.

    The whole point of pragmatism would be not to collapse epistemology into either of the usual categories of idealism or realism. Both those presume the knowing self just exists. What is in debate is whether the world also just exists as it is experienced.

    Pragmatism instead takes the psychological route of accepting that selves emerge as models of the world. So the world exists - in some concrete sense. And the mind emerges as a collection of interpretive habits. The less appreciated fact that follows is that the mind then exists by virtue of an epistemic cut - its ability to read the world as an umwelt or system of signs. Things do take an idealistic bent by the end as we very much live within our own psychic creation.

    Every "stone" I see is a token of some notion of "stone-hood". If I need to crack open an oyster, I will recognise stone-hood in my wife's golf club. She might then see something very different if she catches me messing around with her precious nine iron.

    Knowledge of arithmetic is then reflective of semiosis taken to a higher level of abstraction. Ordinary language developed to encode a collective social view of the world and hence a collective social conception of human selfhood. We are socially constructed through the habits of speech. We all learn to think of the world in the same essential way when it comes to stones, golf clubs, cats and marshmallows. Words are the way we structure a generalised human relation to the world and so arrive at a generic selfhood shared at a cultural level.

    But as humans, we have moved on to add a mathematico-logical level of semiosis to our sense of selfhood. We invented a language based on numbers - pure generic symbols. So this is a new epistemic game with its own set of rules. Ordinary language is meant to be all about living in the world as a self-interested tribe of humans. Symbolic language is the attempt to step outside that zone of obvious self-interest and talk about the world in a disinterested or objective fashion.

    So it is its own game. It relies on a strict separation of the notions of quality and quantity - the generality of some essence and then the particularity of the consequent acts of measurement. The scientific viewpoint, in short. Once I have a notion of stonehood as "a thing in itself, a quality of the world", then I can start counting individual stones.

    Of course, the essence of a stone is a hard to define thing. But the trick is that a mathematico-logical level of semiosis is based on an active rejection of any personal interest - golf clubs can't count as my desire to crack oysters is clearly "too subjective". Instead, objective knowledge has to be based on the quantification of the most universal kinds of measurable qualities - like size, shape, weight, density, structure, etc. So the right attitude to classify stones as stones is to establish constraints, like a stone has to fall within some band of weight, solidity, size, translucency, or other generalised "physical" properties.
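
    To make that concrete, here is a purely hypothetical sketch of "classification by constraint bands" - the function name and every threshold are invented for illustration only, not any real standard:

        # Hypothetical sketch: classify by bands of measurable qualities,
        # deliberately ignoring any observer's purposes (thresholds invented).
        def is_stone_like(weight_kg: float, diameter_cm: float, hardness_mohs: float) -> bool:
            return (
                0.05 <= weight_kg <= 20.0        # heavier than a pebble, lighter than a boulder
                and 2.0 <= diameter_cm <= 30.0   # hand-to-boulder scale
                and hardness_mohs >= 3.0         # hard enough to do stone-like work
            )

        print(is_stone_like(0.8, 8.0, 6.5))    # True - a typical beach stone
        print(is_stone_like(0.01, 3.0, 1.0))   # False - more like a marshmallow

    The golf club fails such a test however well it cracks oysters - which is exactly the kind of self-interested shortcut the quantified definition is designed to screen out.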

    The obvious idea is that we are giving up our clearly self-interested view of the world to adopt one based on the most abstracted and unselfish possible point of view. Physics can't deny the essential facts of our stone - that it is a fragment of rock, worn reasonably smooth, and of a size that is between a pebble and a boulder. And from the definition of one stone, we can find other stones. Then we can apply the principle of identity and get right into all the arithmetical and logical operations which shift individuated things about in atomistically-deductive patterns.

    So epistemology exists on multiple levels of semiosis. And it is in recognising the self-interest inherent in an epistemic relation with the world that we can in turn construct a formally self-disinterested level of semiosis. Epistemology itself can be extremitised now so that we live with a dramatic contrast between our subjective knowing - as might be expressed through poetry, art, and other cultural forms - and our objective knowing, which is the business of science and maths.

    So we have actually constructed a deep conflict in which there are two paths to true knowledge, it appears. But again, the pragmatist will point out that we, as humans, are still having to give priority to actually having to live in the real world. Both the subjectivist and the objectivist have all their pretty rhetoric about their ways of knowing. Yet both are still bound by the fact that knowing is about acting, and all that results from having acted. So both the objective and the subjective extremes are going to be "found out" in practice.

    The habits that survive that test are the habits that did in some sense work. The selfhood that resulted was one adapted to "its" world. Knowledge wasn't either found or created in the process. But a state of knowing - a state of interpretance - could be observed to persist in a self-sustaining fashion. It did the job.
  • Do you believe there can be an Actual Infinite
    I didn't make any point regarding physical continuity (if space can even be called physical).MindForged

    What did you mean by space being "actually infinite" then?

    From the very beginning is took issue with the OP's assumption that any sort of actual infinity was impossible in virtue of pure logic (because, supposedly, contradictions crop up).MindForged

    The OP might not have been perfectly expressed but it did seem to be arguing from the famous paradoxes that arise from taking the maths "too seriously" as a physicalist.

    Now the usual line from the maths-lover is that the maths got fixed to resolve the problems. And my reply to that is: not so fast buddy. :)
  • Do you believe there can be an Actual Infinite
    That's not true, using an infinity is not the same as a singularity occurring in the theory.MindForged

    Yes. So what I am saying is you really want to be able to build "infinities" into your models, and you really want to avoid getting "infinities" back out.

    They are great if they can be just assumed in background fashion. They are a horror if that is what the calculation returns as its sum.

    But either way, these "infinities" have epistemic status rather than ontic. We realise that as backdrop assumptions, they are strong simplifications. And as calculational outcomes, we are quite within our rights to ignore them and create some kind of work-around.

    Space under relativity is treated as a continuum...MindForged

    That is way too simplistic. Relativity treats spacetime as a pseudo-Riemannian differentiable manifold. As a space, the continuity is about the ability to maintain certain general symmetries rather than any physical continuity as such.

    Black holes and wormholes can punch holes in the fabric - those nasty singularities - and yet still the general covariance can be preserved with the right set of yo-yoing symmetries to take up the slack.

    So relativity took away the kind of simple spatial infinity presumed under Euclid/Newton and replaced it with something that still worked. Actual continuity was replaced by the virtual continuity of unifying symmetries ... plus now the stabilising extra of physical measurements of local energy densities. A bunch of discrete local values to be added to the model and no longer able to be taken for granted.

    But my point was that we still make assumptions (crucial, necessary ones) regarding the existence of infinity in the world as well (relativity and QM both do so), so the notion of an Actual Infinity isn't off the table.MindForged

    But you are talking about a very classical notion of infinity. And that is clearly off the table so far as modern physics would be concerned.

    As I said early on, the best way to characterise things now is that the interest lies in how classicality emerges. So it is the development of finitude from a more radical indeterminism which becomes the story we want to be able to model.

    To say the Universe is just "actually infinite" is hollow metaphysics - a way to avoid the interesting questions. What came before the Big Bang? Where does the Cosmos end? You seem to want to shrug your shoulders and say everything extends forever. That is what maths would say. So let's just pretend that is the case.

    But questioning these kinds of conventionalised notions of "the infinite" is precisely where current metaphysics needs to start. The answers aren't in. We are only just formulating a clear view of what we need to be asking.
  • Do you believe there can be an Actual Infinite
    As has been said a few times, several very solid theories make assumptions that include infinity.MindForged

    And you have been reminded a few times that these solid theories in fact depend on working around the infinities they might otherwise produce. So it ain't as simple as you are suggesting.

    The way to understand this is that modelling seeks the simplest metaphysical backdrop it can get away with. So it is a convenience to treat flatness, extension, coherence, or whatever, as "infinite" properties of a system. If you can just take the limit on some property, it becomes a parameter or a dimension - a basic degree of freedom that simply exists for the system. You don't have to model it as a variable. It is part of the ontic furniture.

    So it is for good epistemic reason that physical models appear to believe quite readily in the infinite. If you are going to have a line that extends, it might as well be allowed to extend forever without further question. That way it drops out of the bit of the world that needs to be measured and becomes part of the world that is presumed. As a degree of freedom, it is fundamental.

    But the history of physics is all about the questioning of the fixity of any physical degree of freedom. Everything has wound up being contextual and statistical. Newton said space and time were flat and infinitely extended. Einstein said spacetime is, instead, of undefined curvature and topology. You had to plug in energy density measurements at enough points to get some predictable picture of how it in fact would curve and connect. Newtonian infinity would then emerge as a special case - an exceptional balance point of, in fact, impossible stability. Some kind of further kluge, like a cosmological constant, would be needed to give a gravitating manifold any actual long-term extension at all.
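
    For reference, the standard form being gestured at here (textbook notation, not a reconstruction of anything in the post):

        G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}

    The curvature terms on the left are only pinned down once the local energy densities T_{\mu\nu} on the right get measured and plugged in, and the cosmological constant \Lambda is the sort of further term, added by hand, that the paragraph above calls a kluge.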

    So if we look at the actual physics, it does seek the "infinities" or taken-for-granted degrees of freedom which can become the "eternal" backdrop of a mechanical description. You've got to find something fixed to anchor your calculational apparatus to. So for good epistemic reasons, it seems that physics is targeting the continuous, the unboundedly extensible, the forever the same.

    But does it believe in them? Does it take them literally? Does it say they are metaphysically actual?

    By now, that would be a very naive ontology indeed. All the evidence says that nothing is actually fixed. It all just merely hangs together in a self-sustaining structured fashion.

    The mathematical notion of infinity is a very misleading one to apply in a physical context these days. The Euclid/Newton paradigm is old hat. Even in maths, geometry has become deconstructed as topology. Space is flat, lines are straight, change is linear, only as the extreme case of a maximal constraint on the possible degrees of freedom in fact. Instead of being fundamental, the perfect regularity and simplicity of a classical geometry is the most exceptional case. It requires a lot of explanation in terms of what removes all the possible curvature, divergence, and other non-linearities.
  • Do you believe there can be an Actual Infinite
    ...imagine as you get closer to the edge of the universe time slows down and right at the edge time stops. So it’s impossible to poke a spear through the edge of the universe because there is no space time in which to poke the spear.Devans99

    Continuing on the "resolution limit" approach now being taken, this would be modelled relativistically in terms of holographic event horizons. So you could imagine "poking your spear" into the event horizon surrounding a black hole, or across the event horizon that bounds a de Sitter spacetime.

    In a rough manner of speaking, your spear would suffer time dilation as you jabbed it into the black hole. It would start to take forever to get anywhere.

    Or if you poked it across the event horizon that marks the edge of the visible universe, then it would disappear into the supraluminal realm that exists beyond.

    So relativity itself already tells us that there is a radical loss of the usual classical observables when we arrive at the "edge" as defined by the Planck constants of nature. There is a fundamental grain of being, a grain of sharp resolution, which the constants define. If we try to push beyond that, then the customary classical definiteness of things begins to break down in ways the theory predicts. The distinctions that seemed fundamental dissolve away.

    The conventional way of thinking about spacetime is that it must exist in some solid and substantial fashion. It is just there. So the metaphysical issue becomes how can a backdrop begin and end? By definition, a backdrop just is always there ... everywhere. So spacetime simply has to extend infinitely to meet the criteria.

    But the emergent view turns this around. Spacetime as a definite backdrop becomes an emergent region of high coherence. And being bounded or finite is the kind of organisation that has to get imposed to create such a state of being. You need some concrete limit - like the speed of light, the strength of gravity, the fundamental quantum of action - to structure a world. The triad of Planck constants are the restrictions that together form up the thing of a Universe with a holographic organisation and a Big Bang tale of development.
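
    For reference, the combinations usually meant by that triad (standard Planck units, spelled out here just for convenience):

        l_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \text{m}, \qquad t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \text{s}, \qquad E_P = \sqrt{\hbar c^5 / G} \approx 1.2 \times 10^{19}\ \text{GeV}

    That is, c, G and h-bar jointly fix the grain of resolution below which the classical picture of definite events in a definite spacetime no longer applies.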

    The Universe is essentially a phase transition. Like water cooling and crystallising, it has fallen into a more orderly, lower energy, state. What changes things is not the magical creation of something new - like ice - but the emergence of further constraints that limit the system's freedoms. A solid is a liquid with extra restrictions, just as a liquid is a gas with emergent constraints.

    So what lies "beyond" any part of a universe is not simply more of the same. Nor is it something completely different. Instead, the distinction is one of resolving power. If the classical world is about a crystalline coherence, then beyond the edges of any patch of the coherent is simply ... the start of the incoherence.

    Crossing an event horizon is just that. It is imagining how things break down now that they are no longer integrated in the usual communicative fashion. Approach the edge and everything just dissolves towards a radical indeterminacy. What seemed definitely one thing or another becomes blurred and confused - a question no longer properly answerable.

    It is just like the edge of a cloud. At some point the fabric frays and it is not clear whether it is still largely cloud or now mostly sky. To argue that there has to be a definite answer - as in arguing about whether things are fundamentally discrete or continuous, finite or infinite - is to miss the point. That kind of constrained counterfactuality is the state that must emerge. It is the outcome and not the origin.
  • Do you believe there can be an Actual Infinite
    Physicists can give a very different answer to the binary question of whether spacetime is "fundamentally discrete" or "fundamentally continuous". They would say that quantum theory argues that it is neither. At base, it is vague or ambiguous. And then the classical binary distinction of discrete vs continuous is what emerges due to sufficient stabilising contextuality. You get a division into distinct events happening within a connected backdrop once a quantum foam has expanded and cooled enough for that to be the case.

    For example:

    While almost all approaches to quantum gravity bring in a minimal length one way or the other, not all approaches do so by means of “discretization”—that is, by “chunking” space and time. In some theories of quantum gravity, the minimal length emerges from a “resolution limit,” without the need of discreteness. Think of studying samples with a microscope, for example. Magnify too much, and you encounter a resolution-limit beyond which images remain blurry. And if you zoom into a digital photo, you eventually see single pixels: further zooming will not reveal any more detail. In both cases there is a limit to resolution, but only in the latter case is it due to discretization.

    In these examples the limits could be overcome with better imaging technology; they are not fundamental. But a resolution-limit due to quantum behavior of space-time would be fundamental. It could not be overcome with better technology.

    http://www.pbs.org/wgbh/nova/blogs/physics/2015/10/are-space-and-time-discrete-or-continuous/

    So the key shift in metaphysical intuition is to see reality as wholly emergent from raw potential. And that then means the infinite is always relative.

    The classical way of looking at it is that either the discrete is the fundamental - you start with some atomistic part and then are free to construct endlessly by the addition of parts - or the continuous has to be fundamental. You would start with an unbroken extent that you could then freely sub-divide into an unlimited set of parts.

    Note the presumption. It is all about a mechanical act, a degree of freedom, that can proceed forever without constraint. If you have a unit to get you started, there is nothing stopping you adding more units to infinity. Or if you have a line you can slice, there is nothing stopping you slicing it finer forever.

    It is a wonderfully simple vision of nature. But it is way too simple to match the material reality. So no matter how wonderfully maths elaborates on this naive constructionist ontology, we already know that it is too simplistic to be actually true.

    The alternative view is that individuation or finitude is context dependent. It is a resolution issue. Both the continuous backdrop and broken foreground swim into definiteness together. The more definite the one grows, the more sharply defined becomes the other.

    So it is like counting clouds in the sky. And beginning in a thin mist. While everything is just a generalised mist, it is neither one thing nor the other - neither figure nor ground, object nor backdrop. It is sort of sky, sort of cloud, but in completely unresolved and ambiguous fashion.

    Then the mist starts to divide and organise. It gets patchy. You start to have bits that are more definitely actual cloud, other bits that are actual sky. Keep going and eventually you have some classically definite separation. There is a nice tight fluffy white cloud that sticks out like a sore thumb against an empty blue background. The finitude and discreteness of the cloud emphasises the infinity and continuity of a sky that now goes on forever. You arrive at a state of high contrast. And it is difficult to believe that it could ever be any other way.

    Of course, physicists now know just how much of an idealisation this is. They even have the maths to model the actuality in terms of fractals. Real life cloud formations better fit a model which directly encodes the fact that individuation is a balance of a tendency towards discreteness and a tendency towards continuity. The holism of material systems means they have equilibrium properties, like viscosity.

    So in the connected world of a weather system clouds are generally bunched or dispersed according to some generalised ratio. They never were these classical objects with definite edges marking them off from the continuous void that surrounds them. All along, they were just a watery transition zone with a fractal balance and hence a fractal distribution in space and time. If you want to model the actual world of the cloud, you have to accept that this grave sounding metaphysical question - is the cloud discrete or continuous? - is pretty bogus.

    The actuality is that cloudiness is a propensity being expressed to some degree of definiteness. It can be in a state of high resolution, or low resolution, but it is always in some state of resolution - a balance between two complementary extremes. We imagine a reality that is polarised as either sky or cloud. Everything would have to be one or the other. Yet now even the maths has advanced to the point that we can usefully model a reality which is always actually in some fractional balance, always suspended between its absolute limits.

    The next step for fundamental physics is to apply that holistic metaphysics to our notions of spacetime themselves. And that is certainly what a lot of quantum gravity theories are about. The traditional classical metaphysical binaries - like discrete vs continuous and finite vs infinite - lose their power as it is realised that they are the emergent limits and not the fundamental starting options. Instead, where things begin is with simple vagueness or indeterminism. You have a quantum foam or some other new model of a world before it gains any definite organisation via the familiar classical polarities.
  • Do you believe there can be an Actual Infinite
    But if one removed all phyiscal mass and energy, both the visible and dark, wouldn't empty space simply be infinite vacuum?InfiniteZero

    No. An empty space is simply a matter field in its lowest possible energy state. This is now a central fact of cosmological thinking. It is what the holographic universe is all about.

    So an empty space is still full of the black body quantum radiation that is "generated" by its own event horizons. The universe at its heat death would still radiate internally with a Planck scale jitter - a photon gas. The photons would be as cold as possible - within Planck reach of zero degrees - and so have wavelengths about the size of the visible universe. So about 32 billion lightyears in length. Unbelievably weak. Yet spacetime would always have this ineradicable material content there as part of what it is.
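
    As a rough check on that wavelength figure (my own back-of-envelope use of Wien's displacement law, not a calculation from the post):

        \lambda_{peak} \approx \frac{b}{T}, \quad b \approx 2.9 \times 10^{-3}\ \text{m K} \;\Rightarrow\; T \sim 10^{-29}\ \text{K gives } \lambda_{peak} \sim 3 \times 10^{26}\ \text{m} \approx 30\ \text{billion lightyears}

    So a photon gas that cold really does have peak wavelengths on the scale of the visible universe.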

    Of course, mathematically you could imagine actually empty spaces. Maths does that routinely. In fact it is the basis of how it goes about the job of conceiving of spaces - as devoid of material content.

    But physics tells us that spacetime and energy content are connected at the hip. Matter tells spacetime how to curve and spacetime tells matter how to move, as Wheeler famously put it. They are two faces of the one reality.

    And so the job for maths is to catch up with reality if it can. At the moment, the existence of this connection is one of the kluges that have to be inserted by hand to make the cosmology work as a scientific model. It would be the big advance to make it emerge as a mathematical prediction.

    Why is there this Planck-scale cut-off that prevents the universe either being infinitely energy dense (as the quantum corrections to any material particle say it should be) or, alternatively, completely empty, as would be the case if the quantum jitter of spacetime itself only had a zero or infinitesimal contribution to make?
  • Do you believe there can be an Actual Infinite
    We take what mathematicians and logicians say seriously when we adopt the formal systems they create. That means that to use such systems we are committing ourselves to a particular kind of metaphysics. If you accept standard mathematics you cannot possibly claim that actual infinities are impossible in virtue of a contradiction. You might say that not every aspect of our particular universe can be infinitized, but there's no argument that the concept itself precludes instantiation in the world.MindForged

    There is a big difference between adopting the maths because it is a useful model and accepting it as the actual metaphysics. And it should be telling that the central problems of modern physics/cosmology revolve around finding ways to avoid the mathematical infinities, or singularities, that are contained in the current best models.

    That is why quantum physics has to be built on kluges like renormalisation that give a semi-arbitrary means of just cancelling away most of the infinite quantum contributions to bare particle properties. The formal maths returns the answer to any question as "the quantum corrections sum to infinity". And then the physicist says we will just introduce a cut-off factor that cancels away all that gross excess and leaves us with the exact sums that match observation.
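
    Schematically, the move looks like this (a toy illustration of cut-off regularisation, not the actual field theory calculation):

        \int_{\mu}^{\infty} \frac{dk}{k} = \infty \quad\longrightarrow\quad \int_{\mu}^{\Lambda} \frac{dk}{k} = \ln\frac{\Lambda}{\mu}

    The formally infinite correction is capped at some scale \Lambda, and the \Lambda-dependence is then absorbed into a redefined ("renormalised") parameter so that the final predictions refer only to measured quantities.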

    So the infinity-generating maths can be tamed by introducing heuristic constraints. After that, the maths really works well. But there is then no particular reason why you would think the maths represents a good model of the actual metaphysics.

    It is the same everywhere you look in the physics. Particles are explained by symmetry maths. But the maths is too perfect usually. It sums to zero. Some other factor has to be added to the story to explain why there is a faint asymmetry in the mix such that not everything cancels away, leaving nothing. Matter and anti-matter can't be perfectly symmetrical otherwise all of one would annihilate all of the other, leaving no mathematicians or physicists.

    A Theory of Everything would aim to offer a completely mathematical description that did away with the various kluges that physics has been forced to develop to get rid of the pesky infinities and zeros. However my view is that this in turn requires a different maths of infinity. The metaphysics of the maths would be what has to give.

    Reality is already telling us that now. :)
  • Epistemic Failure
    Thanks Tim. Nice to hear.
  • Do you believe there can be an Actual Infinite
    Statements like space or time maybe actually infinite... nonsense.Devans99

    I think “nonsense” is too strong. But there is certainly a real metaphysical question here. Our mathematical models lead to rather glib beliefs about infinity. And our current physics makes it a much more complex and interesting issue.

    Principally I’m talking about the discovery that reality is quantum and so individuation is contextual. To arrive at some located number of entities, you have an emergent limit on how many can exist for a given material extent. This is the holographic bound on information or the light cone principle.
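
    The bound usually meant here is the Bekenstein-Hawking one (standard form, my gloss rather than anything spelled out in the post):

        S_{max} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 l_P^2}

    That is, the information a region can contain scales with the area of its bounding surface counted in Planck units, not with its volume - a finite tally fixed by context rather than an unlimited one.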

    So, in practice, space and time are materially constrained. They may be modelled as infinite dimensions, unlimited. Yet once matter and energy are added to the picture, then things look actually quite different. You have an ontology which is about finitude emerging from ambiguity, rather than one which presumes an underlying continuity that can be infinitely divided - at no physical cost - as is the case with the ur-model of the mathematical number line.

    So infinity is a mathematically revered notion. Folk like to apply it to metaphysics as if it were true. But modern physics points to a very different ontology of actualisation now. The maths is out of date.
  • Are proper names countable?
    Still don’t get it? My point was about physical limitations on logically inspired notions. An infinite string has the problem it can’t actually be said in less than infinite time.

    You reply by pointing out that this isn’t a problem if strings terminate in finite time. Way to go.
  • Are proper names countable?
    So you want to commit to the position that 0.999.... and 1 pick out two different proper names here? Cool.
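
    For anyone wanting the standard identity spelled out (ordinary arithmetic, nothing special to this thread):

        0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^n} = 9 \cdot \frac{1/10}{1 - 1/10} = 1

    On the usual reading they are two spellings of one and the same number, which is the problem with treating them as two distinct proper names.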
  • Are proper names countable?
    There is no ambiguity to this infinity. It's an endless series of moments of speech.TheWillowOfDarkness

    Studiously missing the point as usual.
  • Are proper names countable?
    It would however take an infinite time do indicate exactly which individual one was referring to.andrewk

    Like I said then.
  • Are proper names countable?
    That's not an issue of the speaker because they are the one taking the action.TheWillowOfDarkness

    In what sense would they have ever taken the action? The point here is that predication needs to name the name to make some definite claim. Monkey with infinity and you are dealing with ambiguity.
  • Are proper names countable?
    I missed out a few key words. I meant that individual names would have infinite length and so you would have to wait an infinite time to discover whether the reference is to Jim...............my or his brother, Jim............mi.
  • Are proper names countable?
    An infinite number would be names of infinite length and thus require an infinite time to be actually said. So there’s that problem.
  • Epistemic Failure
    It's not a failure if what you require - semiotically - is a machinery of infinite potential reference coupled to constraint of semantic indifference.

    So epistemology wants these two complementary things.

    It wants a syntax that can generate endless variety. An alphabet of 26 letters can be used to generate every possible word and sentence. Four DNA bases can generate every possible protein molecule.
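
    A toy count of that syntactic openness (the string lengths below are picked arbitrarily, purely for illustration):

        # Toy count of syntactic openness (illustrative lengths only).
        letters, bases = 26, 4
        print(letters ** 10)   # 141167095653376 possible ten-letter strings
        print(bases ** 30)     # ~1.15e18 possible 30-base DNA strings
        # Semantics is the closure that collapses such a space: the genetic code
        # maps all 4**3 = 64 codons onto ~20 amino acids plus stop signals.
        print(bases ** 3, "codons -> ~20 amino acids")

    The point of the sketch is only the asymmetry: the generative side explodes combinatorially, while the semantic side is a drastic pruning down to what makes a difference.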

    Then that unlimited openness gets coupled to the thing that then epistemically closes it. There is the other thing of a semantics - a purpose to be served, a reason to care, a distinction that is actually worth marking, a difference that in fact makes a difference.

    So any sentence or protein could be produced. For a natural system, that is a huge freedom. And that semiotic possibility in turn allows a system of limitation which decides some statements or molecules are noise, or junk, while others are signal, or have intentional value.

    Epistemic closure thus becomes a reasonable choice. Rather than worrying that the world is "some totality of facts", facts become distinctions or individuations that could materially matter. Facts aren't definite in themselves in some realist fashion. They are simply what is "true" - or worth us knowing - to the degree that we have some reason to care.

    Facts thus are always intrinsically self-interested. While also being "about the world".

    It is this double-headed nature that often confuses. Realism vs idealism tries to make facticity all a thing of the world, or all a thing of the mind. But semiotics shows that "facts" are the signs by which we relate to the world. Indeed, epistemically, it is the relation that creates the self and its world.

    But anyway, the way it works is that syntax gives you your referential openness and semantics gives you your referential boundedness. Together, they compose an epistemic system.
  • Ontology: Possession and Expression
    The idea I believe is that the modes express substance and that substance is nothing other than its expression in the modes.StreetlightX

    This points the conversation in the right direction - towards an immanent, emergent, process view - but it doesn't really deal with the ontological issue of how modes get expressed and individuation becomes a substantial fact.

    One has to continue on all the way to a contextual or constraints-based view of causality, where it is the limitation on possibility that is the story of individuation. The essence of something is defined by the information that limits the variety of its fluctuations. And so then any individual thing is a hylomorphic mix of that limited state and then the further fluctuations which haven't been suppressed because they don't matter, given the general purpose or form in play.

    So the Aristotelian four causes/hylomorphic view of substantial being or individuation was pretty much correct after all. His logical talk picks out a hierarchy of predication. It sounds like he is talking about substances with properties. And indeed, that logical atomism is a pretty handy model of reality if you just want the quick and dirty reductionist approach. It does the job when you are living in a Cosmos that is mostly large and cold, and being is in fact in its most highly developed or concrete state.

    But behind that post-substance model of being - objects with properties - is the pre-substantial or developmental story. And here it is a case of top-down constraints in interaction with bottom-up degrees of freedom.

    Instead of individuation being some kind of simple expression - a reversion to monistic thinking - it is the complex product of a historical or contextual repression. Possibilities get constrained in some globally general fashion. And then particulars exist as differences that don't make a difference - to the purpose or form of the thing.

    A horse could be white, brown or black. From a species point of view, these are unconstrained genetic possibilities. Melanocytes offer these basic degrees of freedom. They are differences that don't make a difference when it comes to being able to breed. And so a property of being horse comes to include this range of hide colour as a matter of accidental expression. But a horse - as defined by the constraints of a genetic lineage, the information of an evolutionary history - is strictly limited in its likelihood of expressing other identities, like possum, or tiger, or goat.

    So Aristotle is great because two ontological models can be found in his metaphysics. There is the simple reductionist model that is atomism, where being is already substantial and possesses inherent properties. This is the upward construction or composition based ontology that normal predicate logic talk picks out.

    Then there is the other holistic or systems view of ontology that is the four causes/hylomorphic model. Substance develops into crisply individuated being by the imposition of downward constraints that limit the possibilities for the accidental. And essentially that is an open-ended view of being, as a limitation of the accidental is not the elimination of the accidental. Indeed, this becomes an ontology where the accidental is also something that is ontically fundamental. As Peirce put it, the synechism or continuity of constraints is matched by the tychism or spontaneity of pure chance.

    A complete ontology of nature thus has two useful models. Reductionism and holism. We can understand reality in mereological terms as an atomistic collection of individuals, or instead as a complex developmental story of individuation.

    Again, the move from possession to expression is too simple a shift in emphasis. It only gets us to the first step of arguing for emergence. And that remains inherently mysterious as it does not explain why individuation should take some general mode.

    If the goal is to account for essence and accident in some fashion more sophisticated than subject and predicate, then you actually need to continue on to the full-blown holism of a constraints-based ontology.
  • Brain Food, Brain Fog
    And have you experienced diet-related “brain fog”?0 thru 9

    The article doesn't mention it but the link is now even stronger with the recent discoveries about the microbiome and gut-brain axis. In short, your diet needs to create a healthy gut bacterial ecosystem.

    There is symbiotic signalling between bacteria and gut. And the gut and brain are also talking via the central nervous system. So beyond leaky gut and other mechanisms, this would be another big reason for brain fog. It would also be a reason to think twice about using antibiotics as well as changing your diet.
  • Stating the Truth
    But anyway, after a while the fun stopped . Its an addiction. The world got fuzzier and fuzzier and reduced to what I could make of it philosophically. To the point where major life events would be happening and I'd be only half-there, thinking about how I could analyze this and fit it into my philosophical preoccupations, or weaponize it argumentatively. Its not a good thingcsalisbury

    OK. I'm no therapist. But let's take this as the core complaint you are presenting with. And it is certainly a recognisable condition.

    Some of us are good at these kinds of argumentative skills. They come naturally. But they can put us at a distance from our own lives and societies. And maybe also we need a well defined subject matter to apply them to. Inquiry has to have some point so that it can move towards a definite end. All that energy ought to have a purpose so that it feels well connected to a goal of value.

    But before even worrying about that, an actual therapist would say check your mental health foundations. Maybe this rather intellectualised complaint - feeling troubled by a habit of being argumentative - will disappear as an issue if you first focus on getting the basics sorted.

    What does that mean? It starts with the body and physical condition. Strength training, good diet and sufficient sleep. If you are going to turn a new leaf by constructing a better set of habits, then hit the gym, understand nutrition and don't compromise on shut eye. These are all routines to build.

    There is a lot of new research to confirm this. Modern life is dreadful on these three scores. Your diet, for instance, affects your gut bacteria and that links straight to your brain and moods. Start feeling physically terrific and the other stuff starts to recede as anything to worry about.

    Then of course, after physical health comes the quality of your social relationships. Again, fixing these might require the building of new sets of habits. It might or might not be an issue for you. However again, I think it is true that you want to build from the ground up. If you are going to make a difference in terms of changing habits, fixing the basics could be 80 percent of the answer.

    After that would come what you do with your argumentative skills. Do you lock them away in the cupboard? Do you find them something useful to do?

    I agree. There is the problem of over-analysing life. It has its destructive possibilities. But also, for some of us it is what we are good at. Would we really want to give it away?

    I think the answers here would be highly individual. It would be more something for you to discover. Which is unlike the general therapeutic advice - the promise that you will get big and immediate returns from focusing on developing the routines for a healthy body and satisfying social engagement.

    Its a fucked up logic: I project onto other people the negative self-image I have of myself, I imagine them seeing me like that, so then I feel humiliated, and humiliate myself, then blame them for feeling the way I do.csalisbury

    Here this touches on how we understand the facts of the human condition. My argument is the familiar one from symbolic interactionism and positive psychology.

    You look to be talking about the mask we have to present to society so that we become predictable and interpretable beings within that society. We are actors in a running social drama. And so we must present the self that speaks to some intelligible role. Others read that mask and act according to its "truth". Both sides have to do the work that makes the mediating sign a correct framing of the self~other relation.

    Looking at it in this fashion should create a distanced third person view of what you are doing. You are describing the situation very personally. The mask you employ is a tactic to deal with an unease in social relations but then it traps you in the restricted space of actions it legitimates.

    Everyone struggles with this to some degree or other. My daughter sounds very similar in that she is super-empathetic and socially aware, which then rebounds on her because she judges herself the way everyone else in public ought to be judging her, when in reality most people barely even notice what you are doing except to the degree it might trouble their ability to predict your impact on their personal sphere of concern.

    I think it is striking how much people don't actually notice the elaborate "you" that you mean to project to the world. And if you are in fact operating from a sophisticated self-image, this is a good reason for feeling no one really gets you.

    One example from my own voyage of discovery. When I was 14, I got it into my head not to wear my school jersey because my mum didn't want to splash out on buying the "cool" school blazer. I also thought the rough wool was too scratchy.

    So then the winter term comes on. Ice on the puddles. Cold wind at the bus stop. But I'm still not bending and wearing that jersey. It is not that I've said any of this to anyone, let alone my mother. Outwardly, I am just a hardy kid not feeling the cold. But I've constructed a silent act of rebellion with no graceful exit to it - even if any morning I could have simply pulled on the jersey.

    So I go the whole winter like that. I'm talking an Auckland winter where it's mild. A jersey would be enough. But still, I'm the only person in the entire school going without one. Yet no one appears to notice the fact. Not even my group of friends. There is a comment or two - especially because on the worst days I have to actively demonstrate I'm not cold by standing out in the wind at the bus stop, not huddling by the wall with the rest. However it is a fact attracting no interest or concern. It is only when the annual class photo has to be taken, and someone has to go borrow a jersey for me so I don't spoil the symmetry of the picture, that it finally gets any kind of official attention among my peers. Along the lines of, "now you mention it, that was a bit odd."

    So it is an example of how most stuff washes over people. They are on the lookout for the easy to read signs with everyday meanings. I learnt that on the whole, you remain invisible even when you think your weirdness is in plain sight. Most people have no need to analyse other people too closely. As a rule, nothing is ever as big a deal as you are going to think it would have to be.

    In all three scenarios tho I'm shutting off any form of actual emotion connection. I'm bracketting my emotional needs.csalisbury

    The three options would all have reasonably naturalistic explanations.

    1) Being weird and self-deprecating is to choose the social route of submitting. Abase before the group so as to be accepted on that score. And that is just evolutionary biology. Social species use signals of submission to allow them to fit into hierarchical social structures. So while you might see the strategy as some horrible personal mistake, it is also pretty natural in its logic. There is less reason to beat yourself up about it on that score.

    2) Being mocking and cynical is to display a more dominating posture. Again, the natural dynamic of hierarchical organisation demands this polarisation of roles. One must gracefully dominate, the other gracefully submit, so relations run smooth.

    Again it is deeply natural behaviour - logical in a systems sense. It is only with self-conscious humans that we would note ourselves falling into those contrasting roles and so ask the question of which one we truly are.

    3) Withdrawal is also a natural response. It is fairly hardwired and so not some weird choice you made.

    So you are accusing yourself of shutting off from emotional connection as if that were letting your better self down. But I think it is just social reality. We are tied to interacting through a system of social masks. That is just the way the game has to work - semiotically. There has to be some face we present that makes us part of a predictable and interpretable social environment. And then that has to continue the good old games of dominance and submission on which social organisation depends.

    So the conflict is between a romantic cultural ideal of how we should be - honest, true and naked in our interactions with each other - versus the evolutionary reality that we are creatures formed within semiotic systems that demand a natural hierarchical organisation.

    We can kick against this evolutionary determinism. But don't expect to defeat it. It exists for good natural reason.

    Depression, for me, is tightly wrapped up with a broader shame issue.csalisbury

    This is what good therapy could tackle. No doubt your life story could explain why shame would be a central issue. There would have been talk in your past that framed things for you in this way. And it would take talk to get that out in the open, examine it rationally, and start to put in place the habits of counter-talk that you would employ to talk it back down whenever it arises.

    As long as we identify with anything as part of our "self", it is not going to change. If we can "other" it, then we can replace it as a set of framing habits.

    But again, this bit would be very particular to your personal story. And starting with exercise, diet, sleep and relationships is likely to be the most general answer to fixing depression.
  • Why Should People be Entitled to have Children?
    Having a child is imposing on someone.Andrew4Handel

    And so you jump straight into a justification ... based on the impact it would have on the unrestrained freedoms of others within the collective.

    So as I just argued, this is where any talk of rights does start. And as the conversation develops, we would expect some pragmatic balance between the individual and the collective to emerge. That is what it is all about.

    By your reasoning I should be able to kill people until you make and argument that convinces me not to.Andrew4Handel

    Show me how MY reasoning leads to that. :grin:
  • The snow is white on Mars
    It does reflect the logic of nature, which is different to the logic of machines.

    So nature - as quantum mechanics has confirmed in foundational fashion - is fundamentally spontaneous or indeterministic. It is not actually deterministic but simply highly constrained in its habits. Circumstances limit the freedoms.

    At least that is my metaphysical argument here. Nature is not a machine. Accident and randomness are part of its inherent reality.

    And ordinary language simply follows suit for the same reason - it works. You need the duality of downward-acting constraints and then the local freedoms that supply the actions to be constrained.

    So the answer is complex because it reflects nature at the ontological level - the logic of its self-organising physicalism (despite our mechanical descriptions of its laws) - and also because the brain/mind itself operates with this same natural logic. The brain is not a computer, a machine, but a modelling system seeking to impose informational constraints on the world's entropic degrees of freedom. The brain is trying to make the world predictable by minimising its capacity to surprise.

    So language evolved as another level of that regulatory game. It became a collective social medium via which we could impose a regulatory structure on our shared experience and thus minimise the chances of being surprised.

    Snow is white. Understood as a constraint on our expectations, we then feel surprise - even alarm - when we encounter a patch of yellow snow. Likewise, we would be surprised if it melted to a gas rather than drinkable water.

    So ordinary language is set up with an organic or natural logic. The world is always going to be full of surprises. One can't know everything, especially when nature itself is inherently indeterministic. A stressed beam is surely going to buckle, thin ice is definitely going to break. But exactly how and when is chaotically unpredictable. That is just the way the world is.

    Ordinary language builds that fact in. It doesn’t rely on an artificial exactitude. It only has to constrain a state of belief to the degree that something is expected to be more or less the case. Then being wrong is usefully informative, not some disaster. The model of the world can be tweaked by either adding further constraints, or removing existing ones, to improve future performance.

    Snow is white, except where the huskies pee. Snow is frozen water, on earth at least.

    So ambiguity exists in nature and everyday language is functional because it models nature with that ambiguity in mind.

    Logicism is then the application of a purely mechanical notion of causality to the world. It is the language you would speak if the world had the causality of a deterministic machine.

    Of course, a mechanised view of nature has been terrifically useful in recent human history. The idea of absolute constraint is a powerful technological vision to impose on the world. There is a reason why we want to treat it as the “true” metaphysics.

    But in the end, nature is not actually mechanical. That is just a Platonic vision folk have found useful to impose on its inherent ambiguities.
  • Why Should People be Entitled to have Children?
    It is not clear to me why people have a right to have children and where that right would come from and how it would be justified.Andrew4Handel

    The most general “right” ought to be the right not to be constrained except to the degree that it is necessary. So the default position is “why not?”. We wouldn’t seek to impose restrictions on individuals (or individuals upon themselves) until we can supply the good reasons.

    You simply start on the wrong foot in asking why the right to have kids should exist. The first moral or practical question is why would we think to want to remove the open possibility. The burden is on you to make that positive argument.
  • The snow is white on Mars
    But if we had snow of different origins here (and we probably have, but I have no idea of their kinds), the word would be "naturally ambiguous" (like sand) rather than just, er, "philosophically ambiguous"Mariner

    Ordinary language use is ambiguous and thrives on that fact. Formal language is the attempt to remove ambiguity so as to provide the kind of certainty and absoluteness demanded by logic.

    Ambiguity can never actually be removed. It can only be constrained to an arbitrary degree. Definitions always remain “open to interpretation”.

    But formalism does apply generic syntactic constraints or rules to the expression of ideas. Being as opposed as possible to ambiguity is the key one, given the aim is a form of language that speaks about the true, definite, certain and absolute.

    So unambiguous speech is both a game that can never be won and also the goal to which logicism aspires.

    This irritating fact has launched a bazillion forum threads.
  • Physics and Intentionality
    As I have said many times, 'the law of the excluded middle' didn't come into existence with h. sapiens.Wayfarer

    The LEM really only "exists" as part of a system of thought - the three laws of thought, indeed. And even within logic - as Peirce pointed out - the LEM fails to apply to absolute generality. Its "existence" is parasitic on the principle of identity, or the "reality" of individuated particulars.

    If you do share the view that individuation is always contextual - the big theme of Buddhist metaphysics? - then you would likely be keener to stress the socially constructed aspect of the LEM, not its Platonic reality.

    The reason is, that viewing reason as an outcome of biology reduces it to a function of survival - which is the only criterion that "makes sense" from a biologists perspective.Wayfarer

    Well, the fact that "reason" had its genesis in evolutionary functionality doesn't really make it any less of a wonder how it has continued to evolve through human culture.

    One view is that if mechanical reasoning goes to its own evolutionary limit, that will be expressed in the coming Singularity - the triumph of the age of machine intelligence. So when it comes to the rational elegance of the algorithm, be careful of what you wish for. :)

    Of course, my own biosemiotic approach offers the argument against that. Intelligence remains something more organismal. But anyway, I think you are too quick to dismiss biology as "mere machinery" and so that is why you are always looking for something more significant about life.

    What I mean is, the same proposition, idea, formula, or whatever, can be represented in different symbolic systems, and in different media - digital, analog or even semaphore. I can't see anything confused about that.Wayfarer

    Again, I think the problem is in calling it "the one thing" as if it were an individuated object. That is where the conflation lies.

    A proposition is just some arrangement of words - a sequence of scribbles on a page. And a sentence is marked as starting at the capital letter, stopping at the fullstop. So using an understanding of correct punctuation, we can point to "an item" as if it were an intellectual object.

    But that is just pointing at the sign, the marks, the syntax! And what you are interested in is the semantics, the interpretation, the understandings the marks are meant to anchor.

    Which is where I say that is all about the contextual constraint of uncertainty. This is the opposite of a concrete object way of thinking.

    The meaning of a proposition isn't IN the words being used. Our understanding of what is being proposed is produced by the way we restrict our thoughts in some effective and functional fashion. Seen in a certain contextual light, the collection of marks could seem to stand for some state of affairs.

    Uncertainty remains. But what is for sure is how many alternative or contradictory readings we have managed to exclude. Most of the semantic work is about information reduction - how much of the world and its infinite possibilities you can manage to ignore.

    So the written or spoken words are quite concrete and definite objects of the physical world. You can point to them, record them, play them back later, translate them into any other equivalent code.

    But that is just the syntax - the signs formed to anchor your habits of interpretation.

    Interpretance itself is the semantic part of the equation. And it does not exist in the way of a concrete object but as an active state of constraint on uncertainty. It is not an item to be counted one by one. Every sign could have any number of interpretations, depending on what point of view you bring.

    Words tend towards limited interpretations because that is how a common language works. We need to learn the same interpretative habits so we can be largely as "one mind" within our culture. But constraint is what produces individuation. And it is a living pragmatic thing.

    This leads to the information-theoretic definition of information as "mutual information", or other measures where it is the number of bits discarded, or the possibilities suppressed, that creates the semantic weight.

    I know a cat is a cat not just because it is cat-like in some Platonic generic sense, but because I am also so sure it isn't anything else in the possible universe. The number of other possibilities I've excluded adds to the Bayesian conclusion that it can only be a cat.
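
    To put that Bayesian point as a minimal formula - just an illustrative sketch, with "cat" standing for one hypothesis among many rivals - the posterior gets lifted as much by driving down the alternatives as by any extra positive evidence for cats:

    ```latex
    % Bayes' rule over a hypothesis space {cat, dog, fox, ...} given evidence e:
    \[
    P(\text{cat} \mid e) =
      \frac{P(e \mid \text{cat})\,P(\text{cat})}
           {P(e \mid \text{cat})\,P(\text{cat}) + \sum_{h \neq \text{cat}} P(e \mid h)\,P(h)}
    \]
    % As the rival likelihoods P(e|h) get driven toward zero, the denominator
    % shrinks toward the numerator and the posterior climbs toward 1 - the
    % exclusions do the semantic work, not any change in the evidence for "cat".
    ```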

    As usual, this is the way the brain actually functions. Attention gets focused on ideas by inhibiting every other competing possibility. And this is easy enough to demonstrate through experiment. Thinking about one thing makes the alternatives less accessible for a while.

    What I'm arguing is that while in each case the representation is physical, the capacity to understand and interpret the meaning of those signs can't be understood in physical terms. What is doing that, what has that capacity, is not itself physical.Wayfarer

    If science can see matter and information as two faces of the same physics, then why can't it understand even interpretation as a physical act?

    We know neurons are doing informational things when they fire. We know they are forming a living model of their world.

    Perhaps what we - at the general cultural level - lack is then a way to picture in our heads how informational modelling winds up "feeling like something".

    And yet the irony there is we are happy to picture little atoms bumping about and to think we actually understand "material being" when doing that. Any physicist will say, stop right there. We really have no idea why matter should "be like matter". Sure, we have the equations that work to produce a modelled understanding in terms of numbers that will show up on dials. But we are still stuck at the level of the phenomenal - the umwelt of the scientist.

    So my approach is based on accepting that we are only ever going to be modelling - whether talking about matter or mind. The aim becomes to have a coherent physicalist account that is large enough to incorporate both in formal manner.
  • Physics and Intentionality
    Yes, and inescapably so, because we have two orthogonal (non-overlapping) concepts.Dfpolis

    But what does orthogonality itself mean? They are two non-overlapping directions branching from some common origin.

    So that is the secret here. If we track back from both directions - the informational and the material - we arrive at their fundamental hinge point.

    This is what physics is doing in its fundamental Planck-scale way. It is showing the hinge point at which informational constraint and material uncertainty begin their division. We can measure information and entropy as two sides of the one coin.

    As it happens, biophysics is now doing the same thing for life and mind. The physics of the quasi-classical nanoscale - at least in the special circumstance of "a watery world of watery temperature" - shows the same convergence between information and entropy for the chemically dissipative processes that make life possible.

    So this is the unification trick. Finding the scale at which information and entropy are freely inter-convertible. That is what then grounds both their separateness and ability to connect.
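
    For what that "two sides of the one coin" amounts to in the standard textbook physics - just the generic formulas, not the specific biophysics I'm gesturing at - the exchange rate is set by Boltzmann's constant:

    ```latex
    % Boltzmann entropy over W equally probable microstates vs the Shannon measure:
    \[
    S = k_B \ln W, \qquad H = \log_2 W \ \text{bits}
    \quad\Longrightarrow\quad S = (k_B \ln 2)\, H
    \]
    % And Landauer's bound: erasing one bit of information at temperature T must
    % dissipate at least
    \[
    E_{\min} = k_B T \ln 2
    \]
    % of heat - informational constraint and thermal entropy traded at a fixed rate.
    ```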

    Talking about their orthogonality is one thing. But talking about their connection has been the missing piece of the scientific puzzle. That lack of a physicalist explanation has been the source of the mind/body dilemma.

    But intellect and will do far more than "constrain material dissipation or instability." They have the power to actualize intelligibility and to make one of a number of equally possible alternatives actual while reducing the others to impossibility.Dfpolis

    What is constraint except the actualising of some concrete possibility via the suppression of all other alternatives?

    So intellect and will are just names that you give to the basic principle of informational or semiotic constraint once it has become internalised as some conceptualised selfhood in a highly complex social and biological organism.

    This is just backwards. Thought is temporally and logically prior to its linguistic expression. If this were not so, we would never have the experience of knowing what we mean, but not finding the right words to express itDfpolis

    The old canard. Sure, fully articulated thoughts take time to form. First comes some vague inkling of wanting to begin to express the germ of an idea - a point of view. Then - like all motor acts - the full expression has to take concrete shape by being passed along a hierarchy of increasingly specified motor areas. The motor image has to become fleshed out in all its exact detail - the precise timings of every muscle twitch, the advance warning of how it will even feel as it happens.

    And then when thinking in the privacy of our own heads, we don't actually need to speak out loud. Much of the intellectual work is already done as soon as we have that pre-motor stage of development. The inner voice may mumble - and stopping to listen to it can be key in seeing that what we meant to say was either pretty right or probably a bit wrong. Try again. But also we can skip the overt verbalisation if we are skating along from one general readiness to launch into a sentence to the next. Enough of the work gets done flicking across the starting points.

    So you want to make this a case of either/or. Either thought leads to speech or speech leads to thought.

    The neurobiology of this is in fact always far more complicated and entwined. But at the general level I am addressing the issue, Homo sapiens is all about the evolution of a new grammatical semiotic habit.

    Animals think in a wordless fashion. Then "thought" is utterly transformed in humans by this new trick of narratisation.

    If we only thought in terms of existing language, we would never need to coin new words.Dfpolis

    Huh? The point about constraints is that they limit creative freedoms. But creative freedoms still fundamentally exist. The rules set up the game. Making up new rules or rule extensions can be part of that game.

    Semiotics - if you follow the Peircean model - is inherently an open story. It is all about recursion and thus hierarchical development. You can develop as much complexity or intricacy as the situation demands.

    And where does thought not expressed in marks or sounds fit into your theory? I have just shown its priority, but it finds no place in your model.Dfpolis

    In your dreams you have. :)

    Animals and neural nets can generalize by association. Forming associations is not abstracting. Generalization is a kind of unconscious inductionDfpolis

    Get it right. Generalisation is the induction from the particular to the general. For an associative network to achieve that, it has to develop a hierarchical structure.
  • Physics and Intentionality
    There is no reason a unified human person cannot act both intentionally and physically.Dfpolis

    But that is still a dualistic way of expressing it. The scientific question is how to actually model that functional unity ... which is based on some essential distinction between the informational and material aspects of being.

    For what it's worth, I say this has been answered in the life sciences by biosemiotics. Howard Pattee's epistemic cut and Stan Salthe's infodynamics are formal models of how information can constrain material dissipation or instability. We actually have physical theories about the mechanism which produces the functional unity.

    The example I gave in the OP was that of the transmission of a single item of information across different kinds of media - semaphore, morse code, and written text.Wayfarer

    But also, these are just different ways of spelling out some word. So the analysis has to wind up back at the question of how human speech functions as a constraint on conceptual uncertainty.

    Semiotics is about the interpretation of marks. So "information" in the widest sense is about both the interpretation and the marks together - the states of meaning that arise when anchored to some syntactical constraint. A definite physical mark - like a spoken word - is meant, by learnt habit, to constrain the open freedom of thought and experience to some particular state of interpretation.

    The Shannon thing is noting that this is what is going on and then boiling it down to discover the physical limit of syntactical constraint itself. So given that any semantics depends on material marks - meaningfulness couldn't exist except to the degree that possible interpretations are actually limited by something "solid" - Shannon asks what is the smallest possible definite physical mark. And the general answer is a bit.

    That analysis thus zeroes in on the point at which information and matter can physically connect. It arrives at the level of the mediating sign - the bit that stands between the world and its interpretation.
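
    The textbook Shannon measure makes that limit explicit - nothing here beyond the standard formula:

    ```latex
    % Average information carried by a source with outcome probabilities p_i:
    \[
    H = -\sum_i p_i \log_2 p_i
    \]
    % A single mark with two equally likely states (p = 1/2 each) carries H = 1 bit -
    % the smallest non-trivial unit of definite physical distinction out of which
    % any message must be built.
    ```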

    So it is confused to talk about a "single item of information" being transmitted in different mediums. If these are all just different physical ways of saying the same thing, then it comes back to different ways of "uttering that word". It is thus "uttering words" that is the issue in hand. So how do words stand for ideas? Or rather - rejecting this representationalism - how do words function as signs? As physical marks, that can be intentionally expressed, how do they constrain states of conception to make them just about "some single item"?

    This 'extra ingredient' is itself reason, which is not explained by science, but which science relies on. It is nowadays almost universally assumed that science understands the origin of reason in evolutionary terms but in my view, this trivialises reason by reducing it to biologyWayfarer

    Biology ain't trivial. It is amazing complexity.

    But anyway, reason is explained by the evolution of grammar. The habit of making statements with a causal organisation - a subject/verb/object structure - imposes logical constraint on the forming of states of conception.

    Animals can abstract or generalise. That is what brains are already evolved to do. See the patterns that connect the instances. But with language and its syntactical form - one that embeds a generic cause and effect story of who did what to whom - humans developed a new way to constrain and organise the brain's conceptual abilities. We could learn to construct rational narratives that fit the world into some modelled chain of unfolding events.

    So psychological science can explain the evolution of reason. Animals already generalise. Language constrains that holistic form of conception to a linear or mechanical narrative. Life gets squeezed into chains of words. Eventually that mechanical or reductionist narrative form came to be fully expressed as the new habits of maths and logic. Grammar was itself generalised or abstracted. A neat culmination of a powerful new informational trick.
  • Stating the Truth
    Why did you say that hedonism is an illusion, but then suggest that structuring life in such-and-such manner gives the "right general mix" (presumably for living an enjoyable life)?darthbarracuda

    You can't just expect a "life of pleasure". It is personal growth and social connectedness that is what most folk actually report as rewarding. So right there, that includes meeting personal challenges and making various social sacrifices - the kinds of things you regard as part of the intolerable burden of existence.

    Epicurus et al have made it clear that directing one's efforts at obtaining pleasure is counter-productive. The seed of the pessimistic evaluation is already in this. Happiness is a byproduct of a struggle. Paradoxically we are most happy when we are not thinking about how happy we are.darthbarracuda

    Pfft. Hedonism is wrong-minded, as I said. You bloody well ought to be disappointed if you aim at it.

    As for byproducts and paradoxes, this is all still just your choice of framing - your resistance to the notion that reality might be in fact complex and not simple.

    Excitement isn't a fearful state of mind. The fight-or-flight response can only work if higher-level thinking is temporarily put on hold. You are not thinking about philosophy when running from a bear. It is fear that fuels the escape.darthbarracuda

    Check out the neurobiology of the sympathetic nervous system some time. Arousal is arousal. Why do you think people pay so much to ride roller coasters or bungee off bridges?

    And try giving a public lecture or doing a TV interview. Or playing a sports match in front of a crowd. You need to be shitting yourself with adrenaline to give a top performance - intellectually as well.

    The research of course shows an inverted-U curve of arousal. There is a case of too much as well as too little. But peak performance requires excitement/fear. Step on to the stage and your heart ought to be pounding as if you were running from that bear.

    People fear stupid stuff all the time - for example, I have a fear of miller moths. They are harmless creatures and I rationally understand this, but I nevertheless have an intense fear of them.darthbarracuda

    Have you ever tried to unlearn the reaction? Do you believe people simply can't?

    An organism with lethal thoughts is in a critical condition that jeopardizes its own survival. Fear sweeps in and suffocates the mind (ssshhhh), coaxing it into submission and back into the perimeter of "safe thoughts" where the organism is no longer a threat to itself. The mind is not the master here.darthbarracuda

    The fight/flight reflex is certainly usefully complex. It even includes a freeze mode. Just stopping paralysed can sometimes work as a last resort when an animal risks attack. So the circuitry to switch between modes of response exists.

    But why are we discussing the wild extremes of life threatening moments? How much do they have to do with the everyday routine? Why can't you frame your arguments in the neurobiology of the normal? What is wrong with taking the typical rather than the atypical as the ground of the discussion?

    You are pathologising your philosophy in short. You ought to examine why you have established that as your constant habit.

    This idea of the mind being the way the body enslaves itself features prominently in the work of Metzinger (meh), the horror of Lovecraft and Ligotti and the philosophy of Zapffedarthbarracuda

    Gawd, it must be true then. :roll:
  • Stating the Truth
    So the pendulum swings between painful discomfort to boredom discomfort.darthbarracuda

    Always one to look on the bright side, hey? :grin:

    I do a lot of strenuous and challenging things. If they actually hurt, I tend to stop. Likewise I enjoy the contrast of doing bugger all for extended periods. If that starts to feel uncomfortable, I tend to stop and find something challenging and strenuous.

    So my pendulum swings, as much as I can manage it, away from what I am ceasing to enjoy. Then because I accept that life has to be lived - hedonism is an illusion - the focus would be on structuring my life so that it gives me the right general mix of the two on a habitual basis.

    Fear/anxiety/panic literally suffocates the mind and prevents it from thinking. This is helpful to an organism's survival, such as during fight-or-flight situations where thinking is only going to slow the organism down.darthbarracuda

    Such rubbish neuroscience. What kind of thinking - rationalisation - do animals do? What is the difference between anxiety and excitement exactly? What is the point of confusing the confusion of the unprepared with the clarity of acting on well-developed habit?

    We are quite literally not allowed to think beyond a certain perimeter without anxiety immediately slamming us down and choking the thoughts out of us.darthbarracuda

    The brain is just so much more complicated and well-adapted than that. The response to moments of stress is not automatically a generalised panic attack. You are talking about what might be the eventual result of prolonged stress, not a normal healthy neurobiology as it was designed to function.

    I agree that a balanced lifestyle is recommended. But this also means a balance in terms of thinking. Too much thinking, too much seeing, will either kill or cripple you.darthbarracuda

    Yes. I was advocating a balance when it came to thinking. I think compartmentalisation in that regard - often seen as an unhealthy trait - is a useful trick to learn.

    Start by stopping those negative thoughts. What good does Pessimism actually do you except as a comforting rationalisation for remaining in "a near-perpetual state of controlled anxiety"?
  • Stating the Truth
    I'm just fucking sad man, I'm unhappy, I'm lonely.csalisbury

    I hear that and I'm sorry for it. And I don't expect to cure that with words here. The only insight I am offering is that the relief of that state has to recognise the complex situation we have collectively got ourselves into. I resolve it by compartmentalising life and not expecting to find myself in some kind of unified perfection.

    If your thing makes you happy all the more power.csalisbury

    I would question "happy" as a useful goal. Challenge, thrill, intensity, seem more at the heart of it. But all that against a backdrop of rest and control. We seek discomfort because we are too comfortable and comfort because we are too uncomfortable. As always, I would talk about what we can dynamically balance in practice. The natural goal of the mind is not to arrive at some fixed state but to maintain a state of adaptation in regards to the world.

    Again, that is picking up on the argument that the self is what emerges as contrast to "the world".

    But I have the suspicion that what makes me unhappy is this drive to harmony, even if its a weird syncopated harmony in disharmony. Im bored and tired of my thoughts. I'm especially bored of dialectics. Have you seen 'get out'? I feel like im half-anaesthetized in the 'sunken place' with some weird dialectical sidekick who argues on my behalf, while i lay unconscious and hurt. Sad & mad.csalisbury

    Again, no words will just fix you if they are just more rationalisation. But my view is that the psychology of this is that we are formed by our habits. And habits can be changed just as they can be learned.

    It sounds like you have clinical-strength depression. So as an established habit, this would be a neurobiological depth issue. And the conventional advice would be to start addressing the structure of your life to which it would be a state of adaptation. Positive psychology and other therapies can give you the tools for examining "the world" as you have imagined it, and to which your state would be an "adaptive" response.

    So again, I couldn't possibly diagnose you from a few posts. But the primary symptom we are discussing here is the habit of rationalising - imposing dialectical structure on "the world". There is this other self within you that isn't shutting off even when you find all its efforts pretty meaningless.

    So what do you do? Do you stand back from that trained and educated aspect of your own personal history and label it as "not me" - just despise it as a wizened Siamese twin? Or do you give it something to do, get it involved in some activities that seem useful and productive in a long-run fashion?

    Maybe you should unlearn the habit? Maybe you should find it useful employment? These do seem the two contrasting ways to go. And both would seem valid.

    What do you really think your situation is? That you can't be fixed or that you resist being fixed? Once you take on the identity of "the broken" then of course you don't actually desire the change that would be a change to that state of habit. And to the degree that you view a life to be perfectable - just happy in some untroubled and thoughtless fashion - you are going to argue that the goal is impossible anyway.

    Habits are learnt by the accumulation of many tiny, barely noticed steps. Habits can only be changed by the same thing. So a question is: do you know from experience the skill of changing a habit? Is that where you could use help and techniques?

    Then the other question is what is the best we can expect? I think feeling adapted - properly embedded in a context, but also with sufficient creative freedoms - does it for most people for natural reasons. I think it helped me that I did compartmentalise my selves to a fair extent into their physical, social and rational modes. I pay enough attention to keep all three plates spinning.

    As I am arguing, they can't be "well-integrated" because they are three spheres of being. They each need to be lived by their own lights to a reasonable extent.

    But if the OP is about the particular symptom of an over-powering habit of rationalisation, which seems mired in meaningless rumination, then you do stand at a crossroads from which you need to shift. Either unlearn the habit as it exists for you, or give it something meaningful to do. Those seem the obvious answers.

    So how would your day-to-day be different doing either of those things? What have other people actually done? That seems a useful conversation to have.
  • Stating the Truth
    Can you actually go any deeper, or is what seems to be a deeper layer actually just another illusion?Sapientia

    It is going to be appearances all the way down. But why talk of it as being just a series of illusions? I find it more accurate to see it as also a hierarchical series of selves.

    So there is the everyday biological "me" that sees the colours. I see the same shade of grey because it is useful to make automatic visual compensations that "make sense" of the image as if it were a real set of surfaces of an object placed out in the sun and crossed by shadows. I am at that moment the kind of self who is seeking to understand the world in terms of an intelligible collection of physical objects. So I want to "see through" all accidental features of the occasion - facts about where the sun is and how the shade falls - so as to get right through to the most meaningful state of interpretance, the one where I am acting self-interestedly in a world composed of physical objects.

    But then there can be other "me's" layered on top, adding further "world making". Language and culture produce the social me that reads the environment in terms of all its rules and customs. I relate to that structure - and in relating, become that type of self, that kind of point of view. I see that it is true that I am driving on the correct side of the road because there is a dotted white line to my right. I see I have done something wrong because someone is scowling at me. These are all social facts that are "true for me", and in being so, are constructing the "me" that would hold them as truths.

    Then we can kick it up another level to the scientific me and the truths at that level of being. Again, the facts of a scientific viewpoint are merely a further configuration of appearances. They boil down to numbers on a dial. The colour picker informs me that the pixels at a set of coordinates are RGB(126, 126, 126). And my scientifically-minded self accepts the objective truth of that.

    So as I replied to @Banno, truth always boils down to a point of view. There is some "us" that informs the relation with the known world. It has its needs and reasons. And then it forms an idea of the shape that facts will take. What it experiences is some "appearance" - or rather less dismissively, an Umwelt.

    An Umwelt is more than mere appearance as it is in fact an image of the world with us in it. There is no dualistic "us" that pre-exists its perceptions or truths. It is the coming to this state of interpretance, this particular habit of sense-making, that also forms the "us" that is the anchor for some definite point of view in regard to "a world".

    This is the properly deflationary route to a theory of truth. Pragmatism results in Umwelts. We emerge as habits of sense-making - which is a positive thing, as that constructs an "us" acting on the world in some concrete fashion. It is not the negative thing of an endless hall of mirrors, a series of levels of illusion with no ultimate "truth of the world".

    Sam Beckett has a quote about a progressively constricting spiral ... Schelling (or maybe Zizek) uses the metaphor of some kind of trap or knot that gets tighter the more you struggle against it.

    It feels more like a very tense and nervous imperative to organize thought into some arrangement of leakproof compartments.
    csalisbury

    I get this. But as I am arguing, I understand the situation to be that we humans are now complex creatures composed of multiple levels of selfhood. We have as a minimum the three levels of being biological creatures with animal needs, social creatures dependent on a co-operative social structure, and rational creatures with a recently-developed interest in living a mechanistic, quantified, technological and mathematically-encoded lifestyle.

    So there are levels of self produced by each of these levels of world-making. And the fit can be a little rough, especially given the accelerating pace of development. Thus we have to work a bit to create any sensible kind of balance when we are all projects in rapid progress, maybe never to be finished.

    Is this where your meta-model of philosophy goes wrong, or goes right?

    My argument is that how we see the world makes us the person who we are. And I grant that I have to be three kinds of person, in effect. So I would argue against your demands that metaphysics, in particular, should be so totalising as to include the kind of selves that are "feeling" or "poetic". Those kinds of selves are more about the cultural and animal creatures that we are. And even then, my complaint is Romanticism over-entangled the cultural and the animal. There is an advantage in being able to compartmentalise a lived life so that we can express our animal, social and rational selves in a more separated fashion.

    The levels of selfhood that need to be constructed to be a complete modern person do have overlap. They do wash into each other. But also, paying attention to keeping them separate, defining their spheres of influence and their appropriate times of expression, can help create a balancing structure.

    The totalising mistake would be to expect some kind of perfect integration of the psyche - the kind that would express itself fully in philosophy by elevating the affective and the poetic to the sphere of the rational. Or as is more the case, attempt to pull the rational back down to "their level".

    It seems healthier to me to be able to compartmentalise to a degree. Balance is being able to switch between broad modes of self - animal, social and rational by turns, depending on the setting. The difficulties would arise when we try to identify as just the one self - the beast, the poet, the thinker - as if we ought to be so centred and simple.

    It doesn't have to be easy or perfect. But it is our reality as modern humans. We have unleashed the scientific and technological forces that are constructing a new level of human selfhood and the world that self sees. And that does create a lived polarity, a structural conflict, between the subjective self (which sits nearest the animal pole) and the objective self (which sits way out at the rational pole).

    But do we have to feel torn if we can construct the further super-self that sees that this is the game and the combination of selfhoods/world-makings that needs to be balanced within our psychologies?

    The first step would have to be accepting that having levels of selfhood is not a bad thing. It is not a failure to be hierarchically organised or stratified in this fashion. We can escape the strangling grip of rationality by making it one of the three things we can do well, in the appropriate context.

    The mistake, in my view, is trying to identify yourself with just one of the three levels on which it has now become natural to live as a modern human. I like the idea of being able to exist as all three kinds of selves in a fairly full sense, while not getting too hung up on achieving a (rational, mechanical) degree of perfection on that score.
  • Stating the Truth
    The question is rhetorical, rhetorical because I do not think it answerable unless "conscious and unconscious phenomena" is restated in some way that makes more sense.tim wood

    Well OK. So here for example I would note that neurocognitive researchers don't actually talk much about conscious and unconscious. They talk about attentional and habitual, or voluntary and automatic.

    They don't find a mentalistic jargon useful. They employ concepts that can be cashed out in terms of neurocognitive mechanisms, or behavioural criteria. So psychology - to the degree it is scientific - does restate "conscious and unconscious phenomena" in ways that make more sense to scientific inquiry.

    I realize there are exceptions, and maybe I'm a half-century out of date, but that's why I asked.tim wood

    The psychology of the 1970s was indeed pretty dismal. Behaviourism had little to offer. Cognitive psychology was too wedded to computationalism. Neuroscience classes were run by the medical school and had little to say about functional architecture.

    But a lot has changed. Evolutionary and social psychology have become big. So too, functional anatomy. Psychology has been put on a decent biological and developmental footing.

    Maybe you are talking about psychology as therapy or something?
  • Stating the Truth
    I'm none the wiser about what you want to say here. My comments are based on pretty basic psychophysics and neurocognitive research. I would presume those would be the parts that work.
  • Did Descartes Do What We Think?
    I have not addressed it, but there are two kinds of knowledge we have been talking about.Dfpolis

    I'm still finding it very unclear what it is that you think you are arguing. But maybe it is this. Maybe you are making the contrast between the roles played by coherence and correspondence in theories of truth.

    So on the one hand, there is the certainty (and doubt) that results from some generalised state of coherent belief. We have a world view that seems to work in reliable fashion. We have a pragmatic set of interpretive habits that do a good enough job of understanding the world. This is what intelligibility feels like. The world is experienced as having a stable rational structure - where dogs are dogs, horses are horses, the house on the corner is still blue like the last time we saw it, and we aren't concerned about the possibility it may have been repainted or knocked down in the last few days.

    Then there is the converse thing of the particular correspondence of a belief to a state of affairs. We are talking now about some individuated fact, which could thus be true or false as a particular thing. Our general knowledge of the world can't tell us that directly. From general knowledge, that giant dog could be a tiny horse. It is a possible fact consistent with a general view. So now we have to go a step further and establish that fact as being one way or the other as a matter of "immediate actual intelligibility".

    So when talking about Descartes, he does seem to be claiming that every fact is merely a particular, and so suffers the challenge of correspondence. But he relies on an evil demon to pursue that line. And that increasingly becomes incoherent with that other aspect of our knowing - the one that relies on a generalised coherence.

    It is not such a stretch to argue that our perceptions could be dreams or hallucinations. You don't even need an evil demon for that to be true (according to the allowable possibilities of generalised coherence) some of the time. But for an evil demon to be universally the case - to the degree it can intrude on our thoughts and make us miscount the number of sides of a square every time we seek to establish that fact as a matter of perceptual correspondence - is a real stretch. It conflicts rather too violently with the rationality we find in knowledge as generalised coherence.

    And in the end, an evil demon that could so completely deceive us on that level - in a totally generalised way - falls out of the picture. It becomes a difference that makes no difference. Life for us would remain the same despite it being "a grand illusion". The epistemology of generalised coherence would absorb Descartes's evil demon. As you say, Descartes is still left in his chamber, stuck in that reality. Doubt so complete leaves him back where he started.

    But when it comes to the history of ideas, it remains important to see beyond the naive realism of the kind of "unity" of mind and world you appeared to be pushing.

    Descartes and Kant stressed the problem of knowledge correspondence. In psychological terms, the mind only appears to represent the world. The world is merely an image. And that creates a troubling epistemic gap.

    But then Peirce and Pragmatism stressed the generalised coherence of belief. So now we have a triadic or hierarchical epistemology with a long-run temporal structure. As I argued, globalised coherence creates a general certainty about what even counts as actually possible or actually likely. Perception begins with a state of reasonable expectation. And then correspondence fits in as the particular facts that might then come into question.

    Is that house still blue? Well, let's go take another look. We would be surprised if it were not. Although it is quite possible it might have been repainted. Less likely that we were simply mistaken in our memory.

    Within a framework of generalised belief, we can then entertain a doubt about any particular fact. But the degree of that doubt is then always pragmatically constrained. We kind of know what needs better checking and what is unlikely to be wrong.

    So knowledge of the world has this intelligible structure - generalised belief that occasions particularised doubting. A Bayesian brain, in other words.
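
    As a toy sketch of that Bayesian structure - my own made-up numbers for the blue-house example, nothing more - the generalised prior does most of the work, the particular observation merely nudges it, and "surprise" is just how improbable the prior made whatever actually showed up:

    ```python
    import math

    # Prior belief about the house, set by generalised coherence: we last saw it
    # blue, repaints are rare, outright misremembering rarer still.
    prior = {"still blue": 0.90, "repainted": 0.08, "misremembered": 0.02}

    # Likelihood of actually seeing a blue house under each hypothesis.
    likelihood_blue = {"still blue": 0.99, "repainted": 0.01, "misremembered": 0.05}

    def update(prior, likelihood):
        """One Bayesian update: posterior proportional to likelihood times prior."""
        unnormalised = {h: likelihood[h] * prior[h] for h in prior}
        total = sum(unnormalised.values())
        return {h: p / total for h, p in unnormalised.items()}

    # Walk past and the house does look blue: belief barely moves.
    posterior = update(prior, likelihood_blue)

    # Surprisal of the observation, in bits: -log2 of its prior-predicted probability.
    p_blue = sum(likelihood_blue[h] * prior[h] for h in prior)
    surprisal = -math.log2(p_blue)

    print(posterior)                            # "still blue" climbs to ~0.998
    print(f"{surprisal:.2f} bits of surprise")  # ~0.16 bits - hardly any
    ```

    The shape is the only point: a framework of prior expectation within which particular facts get checked, and where being wrong is measured in bits rather than suffered as a disaster.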
  • Stating the Truth
    You think the whole of psychology is a failed science somehow? A bit sweeping.