apokrisis

  • What is Information?

    I think of information as a singular co-element of a substance. As the pattern or form describing a substance.Pop

    This would be the Aristotelian view of substance as in-formed material possibility - the doctrine of hylomorphism. And I agree that this is the correct way to look at it.

    But that then leads on to the epistemic cut and other stuff you appear to object to. It says that any instance of substantial being is an intersection between global constraints and localised meaningless action - Peirce's metaphysics of synechism and tychism. And this is very quantum. It says anything could be physically the case, but then becomes limited towards being some concrete event eventually by a prevailing context - some global informational structure that dictates the shape and destiny of a quantum system as a probabilistic wavefunction.

    So from Aristotle to Peirce to quantum physics - and on to the complexities of life and mind - there is a common thread here. Concrete existence is all about global constraints on local uncertainty. And you can then label one side as formal/final cause, the other as material/efficient cause, or synechism and tychism, or information and entropy, or holism and reductionism, or whatever else floats your intellectual boat.

    But all this is about epistemological tactics - the best way to divide reality into intelligible categories so we can appreciate both the way things are parts of wholes, and the way wholes are composed of parts.

    We are constructing a point of view which allows us to read structure into a Cosmos that is part global logical necessity and part local chaotic freedom.

    What you are doing is now trying to locate form in substance rather than seeing form as the external context placing limits on localised random fluctuations.

    That leads to the error of a panpsychic conflation. The global structure and the local potential never have to come together via an interaction that produces the third thing of the actualised substance. You are thinking that form inheres in the substance as an innate primal property. There is no contextuality to formed existence, there is only the brute fact of that existence with a form. And so consciousness can be another property of physical materials - just like materiality itself.

    But then - because we know that degrees of consciousness must have something to do with the complexities of neural circuits - you graft on an enthusiasm for IIT with its emphasis on patterns of relations. Now complicated consciousness can reflect that measurable density of "integrated information".

    The panpsychic position likes quantum theory, or electromagnetic theory, as much as information theory for the same reason. Wavefunctions and force fields can be treated as the deepest levels of substance - a view that seems to have greater scientific credibility since ideas about atomic matter and Newtonian forces became too obviously just the epistemic tactics they always were.

    So now the form inhering in the substance appears visible. A wavefunction or force field can represent a spread of textured surface rather than some featureless spherical pellet of matter. One can see a property that actually looks complex and so matches the ontic intuition of how such a property ought to look at the fundamental scale of being.

    But again, this is arguing from little pictures in the head. Substantial reality could be primal featureless spheres. Or it could be instead the complex texture of a collection of interactions that makes a network tracery that throbs with intrinsic meaningfulness and experience.

    Either way, the mistake is collapsing the holism of a systems view - one which sees substantial being arising from the contrasting intersection of global necessity and local spontaneity - into the usual reductionist metaphysics where substantial actuality, with its entities and properties, is the only thing that really exists. And so the only thing that explains - in brute non-explanatory fashion - why there is material being and mental being. And why they have to be two aspects of the one essence.

    Everything that exists, does so as an evolving self organizing system. Interaction is a constant. So it is clear that information enables the interactional organization of a system. What a system self organizes is information.Pop

    And here you are recruiting even the systems science view to your conflationist cause.

    So yes it is right that natural systems are dissipative structures that self organise via information (or negentropy) so as to further the entropification of the Cosmos. And indeed, systems science would stress this is information that is actually meaningful and at the start of the evolution of intelligent selfhood or autopoietic autonomy. It is not just information but semiosis or the construction of a pragmatic modelling relation between a self and a world.

    But you are taking all that sophisticated metaphysics and saying that this self-organising infodynamics schtick sounds complex. Mind is complex too. So let's collapse the model into the phenomena. Let's pretend that a pattern of information is not a construct of our models but already a form of instantiated being that therefore emanates mind as an inherent property.

    Let's take actual metaphysical and scientific holism and present it as if it is the next big thing in property-based reductionism.
  • The Unequivocal Triumph Of Neuroscience - On Consciousness

    You would not recognize non-physical evidence. The only such evidence is that of the intuitive or imaginative faculties. But such evidence cannot be inter-subjectively corroborated. So it can never be evidence in the "public" sense, but only evidence to the individual whose imagination or intuition tells them that there is something beyond the empirical reality of the shared world.Janus

    Phenomenology is socially constructed. It is a modelling exercise using language to externalise the internal in a socially pragmatic fashion.

    So what you claim to be the facts of two different realms - the public and the private - are instead a way of framing things so that there is this epistemic division ... one that can then allow a further level of organismic regulation to emerge.

    You have to construct the division to exploit the division.

    So you grow up in a culture which trades in an economy of personal wants and needs. You have all these "feelings" that give meaning, direction and purpose to your individual consciousness.

    If you say you are hungry or tired, those are socially-accepted descriptions of animistic states of mind - pretty much a summary of how you are feeling at a brainstem level about your current physiological state. It is a reflexive response with a clear biological utility. If you tell me you are in pain, I can understand what you mean and respond in some culturally approved fashion that is pragmatic.

    But words can also encode almost purely social level states of mind - descriptors like loyalty, alienation, love, the sublime. These are rooted in the public and intersubjective in being largely about the pragmatics of living as a social creature in the human world.

    You are no longer describing "private states of mind" reflexively generated at a hypothalamic or limbic level of the brain. You are describing ways of acting that are strongly under the voluntary attentional control of the cortex. The words - the emotion language - are talk about suitable ways of behaving in a human social setting.

    Are you being brave or reckless when cliff-diving? What you feel privately - in brainstem fashion - is arousal and adrenaline, dread and expectation. And what you also feel is the social framing of your action. Are you being performatively a tough guy, or a dumb ass? That becomes a social judgement. Indeed a social judgement poised like a switch between its two binary interpretations.

    You can feel brave. That was how you framed it privately. And you can perhaps later re-frame it publicly, taking the third person view that what you "felt" was a moment of heedless recklessness.

    Or vice versa. Your first time off the cliff, it might be recklessness that you feel inside - that is how you frame the high brainstem arousal together with a cortical state of conflict, the voluntary attention process that has both the plan to jump, coupled to the difficulty of actually doing so. But afterwards, you can switch that to bravery. You can walk away as if the plunge was no big deal. Do it anytime, as that is the kind of guy you are.

    So this public/private distinction is semiotic. It is an epistemic cut both created and bridged. Language is the means of dividing a group into a collection of individuals ... who can then act with even more perfect group cohesion ... because acting as an autonomous individual is also now something quite definite.

    For animals, there is no such public/private distinction. Being altruistic vs being selfish, or being cooperative vs being competitive, are not "emotional choices" being culturally policed.

    But humans, with their language-structured minds and worlds, are all about this social economy of emotions, feelings and values. The public/private distinction becomes a super-important thing - the basis of the social model.

    It is only when we step up another level - to the numbers-based semiosis of science - that we can see that there is this "unconscious" social game going on.
  • How Does Language Map onto the World?

    Flicking through a bit more of Lawson, I also want to point out how selfhood - the “reality” of the first person point of view - is a product of the closure, the epistemic cut, that produces the self-interested view we then call “the real world”.

    So the ideal gets manufactured by the othering of the real. They become equally "real" as two sides of the same coin. In our consciousness – our semiotically-organised Umwelt or experiential state – we find a self appearing in interaction with its world. We experience a world that has a selfhood as its sturdy centre, giving everything its meaning.

    Reductionist metaphysics - which includes the dualised reductionism of Cartesianism and Panpsychism - makes a problem of this. The self is either everything or it is nothing. The self-referential nature of modelling is treated as a fundamental paradox – an acid of contradiction that eats away at all philosophical certainty.

    But a holistic metaphysics says the self-reference is how selves become real as actors or agents in the world. The mind's ability to close itself – to learn to ignore the world in the quite concrete way now modelled by Bayesian Brain neuroscience – is how a meaningful engagement with the world, the claimed essence of a "realist metaphysics", can in fact arise.

    Of course actual closure – picking up Lawson's principal theme – leads to solipsism. We might as well be living the confusion of a fevered dream.

    So pragmatism speaks to the dynamical balancing act of closure and openness. As epistemic systems, we want to become as closed as possible, but only so as to also be as open as possible in terms of what is actually then surprising, significant, or otherwise worth paying open-minded attention to.

    Science is set up in this fashion. Make a prediction. Look for the exception. Beef up the model. Go around this knowledge ratcheting loop another time.

    Brains do the same thing every moment of the day. The self sits on the side of well-managed predictability and reality – the phenomenal – is discovered by its degree of noumenal surprise. Harsh reality is what we least expected.
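The predict-observe-update loop just described can be sketched as a toy Bayesian model. The Beta-Bernoulli coin example below is purely my illustrative choice, not anything from the original text: surprisal (in bits) is high for unexpected events and falls as the model closes around the regularity.

```python
# A toy "knowledge ratchet": predict, measure surprise, update, repeat.
# Beta-Bernoulli model of a biased coin - purely illustrative numbers.

import math

def surprise(p_predicted: float) -> float:
    """Shannon surprisal (bits) of an observation the model gave probability p."""
    return -math.log2(p_predicted)

def ratchet(observations, alpha=1.0, beta=1.0):
    """Run the predict -> observe -> update loop, returning surprisal per step."""
    surprisals = []
    for obs in observations:                         # obs is 1 (heads) or 0 (tails)
        p_heads = alpha / (alpha + beta)             # prediction from current model
        p_obs = p_heads if obs == 1 else 1 - p_heads
        surprisals.append(surprise(p_obs))           # how "noumenal" was this event?
        alpha, beta = alpha + obs, beta + (1 - obs)  # beef up the model
    return surprisals, (alpha, beta)

# A run of mostly-heads data: early observations are surprising,
# later ones less so as the model closes around the regularity.
s, posterior = ratchet([1, 1, 1, 0, 1, 1, 1, 1])
```

The point of the sketch is only the shape of the loop: prediction comes first, surprise is measured against it, and the model is ratcheted up another notch each cycle.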

    Again, reductionism wants to reduce the complexity of a dichotomous relation to the simplicity of monistic choice. Either the ideal or the real has to be the fundamental case. Pragmatism says instead that the closure in terms of the self-centred view of reality is the feature that makes possible any growth of knowledge about the "truth" of the real world.

    Lawson goes off on the usual Romantic tangent of wanting to give art the role of exploring reality's openness. But that's a bit too Cartesian again.

    Science is set up as the relentless machine for mining the "truth" of reality. Science's problem is not that it ain't sufficiently open to having its theories confounded by surprises. Its problem lies in its failure to be holistic and realise the extent to which knowledge is an exercise that is making the human self as much as comprehending the world.

    Science by and large accepts the Cartesian division between itself and the humanities. Its understanding of causality is limited to material and efficient cause. Formal and final cause are treated as being beyond its pay grade.

    This lack of holism is why modern life seems a little shit. And any amount of art ain't going to fix it.
  • Nice little roundup of the state of consciousness studies

    So I’ll let others explain their own views as best they can,javra

    You did a splendid job of misrepresenting what biosemiosis claims. :up:

    Simply put, semiotics resolves the antique dilemma of realism vs idealism by inserting the epistemic cut of the “sign” between the world and its interpretation.

    That is the familiar epistemic first step.

    Then semiosis becomes also an ontology by pointing out life and mind instantiate this epistemology as their Bayesian modelling relation.

    No claims are made about pansemiosis in this. Life and mind are defined by instantiating a modelling relation within a world that has its own unmodelled reality.

    And then things get more interesting. Physics starts to discover that it is more lively - it houses self-organising dissipative structure. Quantum mechanics makes this fundamental by tacking on statistical mechanics and introducing decoherence/holography.

    It gets a bit pansemiotic as there is somehow an “observer” baked into the physics. There is no model and no localised sign relation. But metaphorically there is interpretance - what quantum folk call contextuality. Dissipative structure has the kind of holism where every “wavefunction” collapse is read by us, as modellers, as a system of signs. The physical events that mark histories of interactions and destroy quantum information are “the cosmos measuring itself into ever more definite being”.

    So it is metaphorical. But better than the reductionist and atomistic metaphors we were using to account for the “weirdness” of the quantum realm.

    Then biosemiosis as a new science crystallised when Peirce’s introduction of a mediating sign as that which connected mind to world was replaced by Pattee’s introduction of a mediating switch.

    Life is founded on mechanical switches or ratchets which physically link the informational and entropic aspects of a living and mindful dissipative structure.

    Pattee had this crucial insight in the 1970s. But it wasn’t until the 1990s that enough of Peirce’s work had been recovered and understood well enough for Pattee to make the connection that his hierarchy theory and modelling relations approach was semiosis under another name. After going quiet for a few years - having fended off the arguments of myself among others - he suddenly emerged as a rebranded biosemiotician in a blaze of statement papers.

    Then roll forward a decade and the other shoe dropped in terms of biophysics showing how biology indeed exploits quantum effects so as to be able to create an organised metabolism using the information bound up in enzymes and other kinds of “molecular motors”. Pattee’s mechanical switches and ratchets.

    So biosemiosis makes contact with physical reality by that shift from the still rather nebulous idea of a sign to be read to the completely concrete story of switches to be flipped. Biology uses a mechanical interface to mediate between biological information and environmental entropy gradients. The combo is the system we call an organism with a metabolism.

    As my interests are more on the mind side than the life side, I am focusing on the higher levels of semiosis that are founded on this basic biological level of “energy capture”.
  • What is life?

    How to put it simply? I would say you are far too focused (like all AI enthusiasts) on the feat of replicating humans. But the semiotic logic here is that computation is about the amplification of human action. It is another level of cultural organisation that is emerging.

    So the issue is not can we make conscious machines. It is how will computational machinery expand or change humanity as an organism - take it to another natural level.

    It is still the case that there are huge fundamental hurdles to building a living and conscious machine. The argument about hardware stability is one. Another is about "data compression".

    To simulate protein folding - an NP-hard problem - takes a fantastic amount of computation just to get an uncertain approximation. But for life, the genes just have to stand back and let the folding physically happen. And this is a basic principle of biological "computation". At every step in the hierarchy of control, there is a simplification of the information required because the levels below are materially self-organising. (This is the hardware instability point seen from another angle.)

    So again, life and mind constantly shed information, which is why they are inherently efficient. But computation, being always dependent on simulation, needs to represent all the physics as information and can't erase any. So the data load just grows without end. And indeed, if it tries to represent actual dynamical criticality, infinite data is needed to represent the first step.

    Now of course any simulation can coarse grain the physics - introduce exactly the epistemic cut-offs by which biology saves on the need to represent its own physics. But because there is no actual physics involved now, how to coarse grain becomes a human engineering decision. So the essential link is severed between what the program does and whether that is supported by the organisation of an underpinning physical flow. The coarse graining is imposed on a physical reality (the universal machine that is the computer hardware) rather than being the dynamical outcome of some mass of chemistry and molecular structure - a happy working arrangement that fits some minimum informational state of constraint.

    Anyway. Again the point is about just how far off and wrongly orientated the whole notion of building machine life and mind is when it is just some imagined confection of data without real life physics. What is basic to life and mind is that the relation is semiotic. Every bit of information is about the regulation of some bit of physics. But a simulation is the opposite. No part of the simulation is ever directly about the physics. Even if you hook the simulation up to the world - as with machine learning - the actual interface in terms of sensors is going to be engineered. There will be a camera that measures light intensities in terms of pixels. Already the essential intimate two-way connection between information and physics has been artificially cut. Camera sensors have no ability to learn or anticipate or forget. They are fixed hardware designed by an engineer.

    OK. Now the other side of the argument. We should forget dreams of replicating life and mind using computation. But computation can take human social and material organisation to some next level. That is the bit which has a natural evolutionary semiotic logic.

    So sure, ANNs may be the approach which takes advantage of a more biological and semiotic architecture. You can start to get machine learning that is useful. But there is already an existing human system for that further level of information processing to colonise and amplify. So the story becomes about how that unfolds. In what way do we exploit the new technology - or find that it comes to harness and mould us?

    Again, this is why the sociology is important here. As individual people, we are already being shaped by the "technology" of language and the cultural level of regulation it enables. Humans are now shaped for radical physical instability - we have notions of freewill that mean we could just "do anything right now" in a material sense. And that instability is then what social level constructs are based on. Social information can harness it to create globally adaptive states of coherent action. The more we can think for ourselves, the more we can completely commit to some collective team effort.

    And AI would just repeat this deal at a higher level. It would be unnatural for AI to try to recreate the life and mind that already exists. What would be the point? But computation is already transforming human cultural organisation radically.

    So it is simply unambitious to speculate about artificial life and mind. Instead - if we want to understand our future - it is all about the extended mentality that is going to result from adding a further level of semiosis to the human social system.

    Computation is just going to go with that natural evolutionary flow. But you are instead focused on the question of whether computation could, at least theoretically, swim against it.

    I am saying even if theoretically it could, that is fairly irrelevant. Pay attention to what nature is likely to dictate when it comes to the emergence of computation as a further expression of semiotic system-level regulation.

    [EDIT] To sum it up, what isn't energetically favoured by physics ain't likely to happen. So computation fires the imagination as a world without energetic constraints. But technology still has to exist in the physical world. And those constraints are what the evolution of computation will reflect in the long run.

    Humans may think they are perfectly free to invent the technology in whatever way they choose. But human society itself is an economic machine serving the greater purpose of the second law. We are entrained to physical causality in a way we barely appreciate but is completely natural.

    So there are strong technological arguments against AI and AL. But even stronger here is that the very idea of going against nature's natural flow is the reason why the simple minded notion of building conscious machines - more freewilled individual minds - ain't going to be the way the future happens.
  • Emergence is incoherent from physical to mental events

    Your questions are gibberish. So I'll leave it there if you are so unwilling to start with the biological simplicity of Pattee's epistemic cut before leaping straight back into the neuroscience. Get it figured out for life, then you can see how that lays the ground for mind.

    (I mean even just the way you call it the "split" rather than the "cut" tells me you aren't really bothered by precise thinking here.)
  • Neural Networks, Perception & Direct Realism

    The wiki article offers a version of direct realism that is indistinct from naive realism...creativesoul

    Yep. And that is the point. The OP certainly comes off as an exercise in naive realism. You can't both talk about a mediating psychological machinery and then claim that it is literally "direct".

    If Marchesk intends direct realism to mean anti-representationalism, then that is something else in my book. I'm also strongly anti-representational in advocating an ecological or embodied approach to cognition.

    But I'm also anti-direct realism to the degree that this is an assumption that "nothing meaningful gets in the way of seeing the world as it actually is". My argument is that the modelling relation the mind wants with the world couldn't even have that as its goal. The mind is all about finding a way to see the self in the world.

    What we want to see is the world with ourselves right there in it. And that depends on a foundational level of indirectness (cut and paste here my usual mention of Pattee's epistemic cut and the machinery of semiosis).

    So this is a philosophical point with high stakes, not a trivial one - especially if we might want to draw strong conclusions from experiments in machine pattern recognition as the OP hopes to do.

    There just cannot be a direct experience of the real world ... because we don't even have a direct connection to our real selves. Our experience of experience is mediated by learnt psychological structure.

    The brain models the world. And that modelling in large part involves the creation of the self that can stand apart from the world so as to be acting in that world.

    To chew the food in our mouth, we must already have the idea that our tongue is not part of the mixture we want to be eating. That feat is only possible because of an exquisite neural machinery employing forward modelling.

    If "I" know as a sensory projection how my tongue is meant to feel in the next split second due to the motor commands "I" just gave, then my tongue can drop right out of the picture. It can get cancelled away, leaving just the experience of the food being chewed.

    So my tongue becomes invisible by becoming the part of the world that is "really me" and "acting exactly how I intended". The world is reduced to a collection of objects - perceptual affordances - by "myself" becoming its encompassing context.
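A minimal sketch of that forward-model cancellation, assuming a toy linear forward model (the 0.9 gain and all the numbers are invented for illustration): the predicted self-caused sensation is subtracted from actual input, so only the world's unexplained contribution remains salient.

```python
# Sketch of efference-copy cancellation: the forward model predicts the
# sensory consequences of a motor command; subtracting that prediction
# from actual input leaves only the unexplained (world-caused) signal.
# The linear model and its 0.9 gain are illustrative assumptions.

def forward_model(motor_command: float) -> float:
    """Predicted self-caused sensation (reafference) for a motor command."""
    return 0.9 * motor_command   # gain assumed learned from experience

def perceive(motor_command: float, actual_sensation: float) -> float:
    """Residual sensation after cancelling the predicted self-movement."""
    return actual_sensation - forward_model(motor_command)

# Tongue moving exactly as commanded: prediction cancels, nothing is felt.
self_only = perceive(1.0, 0.9)
# Same movement plus food pressing back: only the food remains in experience.
with_food = perceive(1.0, 0.9 + 0.4)
```

The design point is that the self drops out of the picture precisely because it is the best-predicted part of the signal.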

    The directest experience of the world is the "flow state" where everything I want to happen just happens exactly as I want it. It was always like that on the tennis court. ;) The backhand would thread down the line as if I owned its world. Or if in fact it was going to miss by an inch, already I could feel that unpleasant fact in my arm and racquet strings.

    Which is another way to stress that the most "direct" seeming experience - to the level of flow - is as mediated as psychological machinery gets. It takes damn years of training to get close to imposing your will on the flight of a ball. You and the ball only become one to the degree you have developed a tennis-capable self which can experience even the ball's flight and landing quite viscerally.

    So direct realism, or even weak indirect realism, is doubly off the mark. The indirectness is both about the ability of the self to ignore the world (thus demonstrating its mastery over the world) and also the very creation of this self as the central fact of this world. Self and world are two sides of the same coin - the same psychological process that is mediating a modelling relation.
  • Is 'information' physical?

    If the fact of generalization itself constituted a knock-down argument that it, and hence the mind that generalizes, must be "immaterial" (even assuming that we knew what that even meant) then everyone who thought about it would be convinced by it and no one would be able to deny it.Janus

    Yep. This is the interesting point. But then that is why Wayfarer would at least be right about the relevance of the information theoretic turn in fundamental scientific ontology. An appropriate form of immateriality is being introduced in the notion of information.

    Science used to deal in the "laws of nature". Reality was some mass of atomic particulars. And yet for some reason, that material state of affairs was regulated by universal laws. It was all rather spooky.

    But now science is shifting to a more clearly constraints-based view of reality. Laws are emergent from states of information. We have new principles like holography and entropy driving the show. The regulation of nature is now something that arises immanently rather than being imposed transcendently. Newton required a law-giving God to explain the fact of there being universal physical rules. Now we can see how nature's laws might just develop, emerge, evolve.

    So this is a big metaphysical shift. But what is really going on?

    As I said, information represents the immaterial aspect of reality that always seems philosophically necessary. Matter alone can't cut it. We've known that since Plato hammered it home.

    But then neither mind nor the divine is much good as the other half of reality - the bit that does the constraining, or the forming and purposing. The mind is patently complex, not fundamentally simple. It claims to be free and open, not constrained and closed. It is all about a particular lived point of view and not a universalised "view from nowhere".

    So our concept of mind as the immaterial half of the ontic equation just offers all the wrong properties. The divine is just the mind taken to another level - minding that is even more potentially capricious, unrestrained, the author of material and efficient causes as well as formal and final cause. Talk of God just collapses all the useful distinctions we were trying to build up and so winds up explaining nothing.

    Science - as the only place real metaphysics continues to get done - accepted that the maths of form does represent the immaterial part of the reality equation. This was the revolution wrought by Galileo, Kepler, and especially Newton.

    It started out as a mechanical notion of form - the computation of the mechanics of moving bodies and rippling waves. Then moved on to become focused on the maths of symmetries and symmetry-breakings. Also probability theory and statistical mechanics became central as descriptions of emergent patterns and the self-organisation of constraints. And of course, conceptions of space and time were expanded to include geometries that were non-Euclidean, conceptions of mechanics were expanded to include behaviours that were non-linear or feedback-driven.

    So science was on a journey. It recognised that its metaphysics needed an immaterial aspect to balance the material one. It started out with mathematical forms that were transcendent - Newton style laws, Newton style dimensions - and has steadily worked towards a picture of reality where the maths is describing immanent self-organisation. The laws and dimensionality simply started to appear as regularities - self-organising attractors that governed dynamics quite directly.

    It became possible to see how matter could form rules to shape its own behaviour - even perhaps form the forms that actually produced "matter" in the first place. Particles became individuated events, localised excitations, persistent resonances.

    Then along comes information theory as the latest improvement on this trip from transcendent cause to immanent self-organisation. Reality still needs its immaterial aspect to explain its material aspect. But now science has a new maths that is suitable for describing and measuring reality in terms of actual "atoms of form".

    The materiality of the world is reduced to pretty much a nothing - just the vague hint of an action with a direction, a bare degree of freedom. And at that point where reality approaches its limit of dematerialised nothingness, it can become semiotically united with an immaterial notion of mathematical form coming the other way. The maths proving itself useful for describing reality was becoming steadily less immaterial and transcendent, or "spooky action at a distance". It was becoming steadily more material and immanent in that it talked about symmetry breakings and statistically probable approaches to limits.

    Now with information theory, you have the exact point (hopefully) where each of these realms - the dematerialising materiality and the steadily materialising formality - finally converge and become one. They translate. Pan-semiosis is achieved as the material description and the immaterial description are two ways of saying the same thing. The measure of one is the same unit for measuring the other. We can go back and forth across an epistemic cut that formally relates the two realms or aspects of being.
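One way to make the "same unit" claim concrete: Shannon's bit measures the formal side, while Landauer's bound prices that very bit in joules on the material side. A minimal sketch (the distributions are arbitrary examples of my own):

```python
# Shannon entropy: the same "bit" measures both missing information about
# a system and, via Landauer's bound, the minimum physical cost of erasing it.
# The example distributions below are purely illustrative.

import math

def shannon_entropy(probabilities) -> float:
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly one bit of form/uncertainty...
fair = shannon_entropy([0.5, 0.5])
# ...while a fully determined outcome carries none.
certain = shannon_entropy([1.0, 0.0])

# Landauer's bound: erasing one bit dissipates at least kT ln 2 joules.
k_B = 1.380649e-23            # Boltzmann constant, J/K
landauer_300K = k_B * 300 * math.log(2)   # roughly 2.9e-21 J at room temperature
```

So the unit of the informational description really does convert into the unit of the material one, which is the translation the paragraph above is pointing at.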

    This is a tremendous and historic achievement in metaphysics. It is stupendous that it is happening right now in our own lifetimes.

    Science of course is still going off in all directions in the scramble to finalise the details of a final theory of reality. But at the level of metaphysics, we can sit back and be entertained by the spectacular outlines of an understanding that is now coming in to dock.
  • Thoughts on Epistemology

    Uluru isn't what I say it is; it is what we say it is.Banno

    Same general semiotic principle. Language embeds the notion of the self that speaks with meaning. So cultures do form vocabularies to serve their pragmatic interests. And we become socially constructed as selves by participating correctly in that language game.

    You could check out GH Mead of symbolic interactionism fame here. He applied Peirce to early sociology. Or Lev Vygotsky for the Russian version.

    You seem to have built your view as a series of deductions from inside your self, or something like that;Banno

    I haven't built anything. It's just pragmatist philosophy and social psychology as far as I'm concerned.

    So it is a position built from scientific observation of human society, human development and human psycholinguistics. So induction not deduction.

    but Wittgenstein is suggesting that one stop and look first, at what happens when language is used.Banno

    Strewth. How revolutionary. You mean like social psychology? Like symbolic interactionism or social constructionism?

    The self doing the speaking is as much a social construct as the language that self is using.Banno

    Did you say that or are you quoting me there? Honestly, I can't tell.

    Removing the Self from where Descartes had placed it in the middle of philosophy is one of the neat things about Philosophical Investigations.Banno

    Well we've already been through how Ramsey whispered the secrets of pragmatism in Wittgenstein's lughole.

    As I say, Peirce was fixing Kant who was fixing Descartes. Wittgenstein is pretty irrelevant.

    There's this really nice old paper of how Kant's cognitivism was fixed by Peirce's semiotics - http://ecommons.luc.edu/cgi/viewcontent.cgi?article=1905&context=luc_theses

    From what you have said it would seem that the speaker can decide in one way or the other if the stone is part of Uluru or not. But that's not what I would say. It's not the speaker who makes such decisions, but the community being addressed. And what is being asked is not about the ontology of Uluru so much as the way we use parts of that sacred rock.Banno

    The speaker could take a view. The community could take a view. All that matters so far as a pragmatic view of truth goes is that each party would be forming some general theory about "sacred Uluru" and would see the stone in evidential terms. Either the stone will be ruled by identity-justified constraints, or the party in question would feel a justified indifference.

    So the threshold might be determined by something physical - like the size or the degree of attachment. Or the criteria could be anything. The person wanting to souvenir the stone might be a tribal magic man or a state authorised geologist with a permit in his pocket. All that matters is that there is a theory that covers the issue and there is a way to "tell the truth of the matter" as some act of measurement. Some attribute of the stone has to become a sign of whether it is imbued with this quality of sacredness or not.

    The key here is that there is a habit of interpretance in play. There is a belief. And then the world is understood in terms of the belief. The belief knows what kind of signs or acts of measurement fall within its scope.

    The stone is stony enough, or sacred enough, or whatever enough, to count as such. Or not, as the case may be.

    The radical psychological claim is then that all experience is like this. Semiosis doesn't just apply to language use, it applies to the basic neurobiology of experience, and even of course to biology in general.

    But then I don't have a clear idea of what this "cut" is - apparently between me and it, as if an individual could have a private language.Banno

    You could look it up. Just google Pattee and epistemic cut. Or von Neumann and self-reproducing automata. Or Rosen and modelling relation. Or....

    You get the picture. Stop being such a lazy sod and make an effort. You might finally learn something. Imagine poor fated Ramsey whispering in your lughole too.

    I know this is misrepresenting you, Apo,Banno

    Well why not pull your finger out and do your research.

    How will you reply? What attitude will you adopt?Banno

    Always the psychodrama, Banno. You want to play the game of "pretend to respect me and I'll pretend to respect you." And worried you won't get that, you try to play the authority figure. You set yourself up as the judge of whether someone's behaviour conforms to some proper standard.

    Well bollocks to that as you know. If you want respect, make an argument that works. Stop pretending that you are somehow in control of how this goes.
  • Thoughts on Epistemology

    Well it is the central thing to a semiotic metaphysics. So yeah.

    A modelling relation with the world is based on the displacement that is the separation of the model from the world it seeks to regulate.

    The idea is terribly simple and familiar. The map is not the territory, etc. I get tired of all the pretence that this is something esoteric and not merely a precision description of the ontology involved.

    If you want a more technical framing, Pattee's slogan is that life is based on the dichotomy of rate independent information and rate dependent dynamics.

    But let's cut out all the feigned shock and horror. The epistemic cut is perfectly straightforward.
  • Thoughts on Epistemology

    A trivial split, as opposed to your world-shattering epistemic cut.Banno

    Hmm. What is it that you don't get about the cut which is the separation that founds the connection? :)

    A Peircean epistemology explains how a self is formed via a capacity to be indifferent to the world. As yet, you have made zero counter-argument. You just make these gurgling drowning noises.
  • Does Genotype Truly Determine Phenotype?

    That means there's no clear-cut definition of genes and ergo, genomes. Biology, unlike physics, appears to be more fluid. Perhaps the issue will be resolved once we define "gene" and "genome" in a better way.TheMadFool

    Biologists knew that already. Cracking the genome was the easy bit. The hard work starts with working out how the information regulates the physics.

    Yes, I was thinking along those lines, wondering whether multifunctional swiss knives qualify as an instance of complexity.TheMadFool

    A knife that is equally bad for every job? :chin:

    One problem though: it's generally believed that evolution evinces a progress from simplicity to complexity...TheMadFool

    Again, this is where I admire the crystal clarity of Howard Pattee. He identified the epistemic cut as the definition of life. So even the simplest RNA soup counts as already irreducibly complex. There is just nothing in the physics that explains what is going on anymore. You have to see how information has now entered the room.

    but if you take the idea of algorithmic complexity and apply it to the universe then, since the universe began, according to a science book, by fixing the value of just six numbers (referring to known physical constants), doesn't that mean the graph of complexity is showing a downward trend? After all there are more bits of information in our genome than there are in just six numbers?TheMadFool

    The Big Bang was an ultimately simple state - a vanilla bath of boiling hot radiation. And the Heat Death will also have an ultimate simplicity - a vanilla bath of radiation so cold and thin that it's just a rustle of zero degree photons.

    It's the bit in between where complexity arises via a series of symmetry breakings. The Higgs field switched on mass and suddenly there were all these sluggish particles cluttering up the vacuum. The vanilla radiation bath became a cosmic dust bowl.

    From there, it just got worse. The dust particles - hydrogen and helium atoms - clumped and caught fire. Stars emerged as fusion furnaces making the light elements. A reprocessing by supernovae then produced all the heavy elements too.

    Next came planets and even planetary biofilms - at least on Earth there is life.

    So yes, simplicity led to complexity. But simplicity gets to win in the long run.
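
    As for the bit-count comparison raised in the question, it can be made rough-and-ready with back-of-envelope arithmetic. A minimal sketch, where the figures are my own illustrative assumptions (64-bit precision per constant, ~3.2 billion base pairs at 2 bits each for the human genome):

```python
# Rough comparison of raw information content, using assumed figures.

# Six physical constants, each specified to (say) 64-bit precision:
constants_bits = 6 * 64

# Human genome: ~3.2 billion base pairs, 2 bits per base (A/C/G/T):
genome_bits = 3_200_000_000 * 2

print(constants_bits)                  # a few hundred bits
print(genome_bits)                     # billions of bits
print(genome_bits // constants_bits)   # ratio of the two counts
```

    So on this naive count the genome dwarfs the six constants by some seven orders of magnitude - which is exactly the intuition behind the question, and why the interesting complexity is in the bit in between the two vanilla baths.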
  • The dark room problem

    It tries to oversimplify human behavior, which is wayyyy more complex, with a naive waydimosthenis9

    That is another misrepresentation.

    Although I agree that as a formalism, it doesn’t tackle the code side of the semiotic modelling relation. Friston’s Markov blanket is a general physical description of the epistemic cut. But it doesn’t talk about the “how” of the machinery that enables such a cut to actually be made in nature.

    There are four levels of such code I would identify. Genes, neurons, words and numbers. Each produces its own "world" or Umwelt. Only humans have verbally and mathematically constructed Umwelts or world-models.

    So yes, there is one general story to be had - a semiotic theory of everything. That is implied in Friston’s approach, but not mathematically expressed in direct fashion. The fact that there needs to be a machinery of semiosis - some system of encoding - is implied by the Markov blanket formalism, but not to be found in that formalism.

    And as you protest, humans are more complex. We have words and numbers that lift us beyond the semiotics of neurons and genes. We have social semiosis and techno semiosis. Friston’s free energy principle was directed primarily at the problem of neurosemiosis, and has been expanded to - sort of - include biosemiosis.

    So I have plenty of “criticisms” of Bayesian mechanics. But I think it helps to have actually understood what Bayesian mechanics might claim.
  • Metaphysics as Selection Procedure

    I didn't understand your suggestion that my asking after the ontological status of your model meant that I was thinking in mechanistic terms. I still don't.csalisbury

    Your line of attack was "your own model is clenched and curled up super tight brooking only those findings and ideas which will reinforce".

    My reply was that my model is like that only in the sense of a seed waiting to unfurl. So it is in fact a recursively open-ended and hierarchically generative model - a properly organic one.

    Mine is a semiotic approach that is based on the search for a core symmetry breaking process. And this core process has been identified by a series of key writers - starting with Anaximander and his notion of apokrisis or "separating out". :)

    In modern times, Peirce's semiotic, Rosen's modelling relation, Pattee's epistemic cut, and Salthe's basic triadic system, are all even sharper approaches to an answer based on the understanding that reality is a product of "matter and sign".

    So my claim is that semiotic metaphysics is the "true" model of organic causality. And then that this model is best understood in terms of its "other", which is going to be the standard issue lumpen materialism that can be generally classified as "classical mechanics".

    The mechanical view of causality revolves around a familiar family of principles (and their "others"), namely reductionism (vs holism), determinism (vs contingency), monadism (vs anti-totalising), locality (vs quantum nonlocality), atomism (vs continuity).

    So what I said was that in your attempts to criticise me, you tried to use the notions of mechanical discourse to show me as "other" to what you implicitly hold to be "the correct position". And I replied by pointing out that that only shows you are wedded to that mechanical discourse. You rely on its "truth" to ground your "truth". But to deal with my position, you would have to appreciate how it stands quite outside this little 18th century romanticism vs enlightenment spat you might be imagining.

    And I'm still curious what your theory of truth is. Or if you even care about that kind of thing? and, if not, why not?csalisbury

    How can you still be curious, honestly? What more do I need to say except Peircean Pragmatism? Or Rosen's modelling relations?

    Truth is a triadic sign relation. It is a process of constraining uncertainty using semiosis.

    The biggest problem I have with this explanation is that it's not really true - you constantly use 'crisp' and 'rigorous' and 'mathematical' to refer to non-mathematical neat dichotomies, as with that true detective analysis way back when.csalisbury

    True Detective turned out to be shit as philosophy, so I don't even remember whatever it was that has got your goat here.

    And note that "crisp" is a technical term that a biosemiotician would oppose to "vague". So it has a particular communal meaning. Although I like it because it is also quite a self-explanatory everyday language term.

    So when I use "crisp", I do mean it "mathematically". That is I am defining it dichotomistically as the "other" of "vague". And thus formally, I am saying crisp = 1/vague - the relation being the reciprocal or inverse operation that is a dichotomy.

    In case you don't follow that, crisp = 1/vague means that crispness is defined as being the least possible amount of the vague. An infinitesimal quantity. Or the furthest possible countable distance away.
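
    That reciprocal definition can be set down as a worked line. This is a sketch of the intended limiting behaviour, not a standard formalism:

```latex
\text{crisp} = \frac{1}{\text{vague}},
\qquad
\lim_{\text{vague}\to\infty} \text{crisp} = 0^{+},
\qquad
\lim_{\text{vague}\to 0^{+}} \text{crisp} = \infty
```

    So each term is defined as the inverse of the other: drive one towards its limit and its "other" shrinks towards the infinitesimal, which is what makes the pair a formal dichotomy rather than a mere contrast.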

    odd. 'dialectic' is certainly not a 'crisp formal mathematical concept.'csalisbury

    Perhaps you see by now that it can be?

    There's this thing you have with 'crisp' - which is very interesting. I mean it's interesting that the word you use most, and seem to find immense satisfaction in, is not itself any more 'crisp, formal, mathematical' than 'selection' or 'hinge.'

    Do you find that interesting? What do you think about it? It seems interesting right!
    csalisbury

    I'm guessing you might be feeling increasingly embarrassed at your half-arsed taunts by now.
  • "What is truth? said jesting Pilate; and would not stay for an answer."

    think apokrisis is decently close to this as wellfdrake

    Thanks. :up:

    My point is all about bringing logic back into the real world by showing how it is in fact grounded in the brute reality of a pragmatic modelling relation.

    The mystery of logic, truth, intentionality, etc, is that they are clearly in one sense free inventions of the human mind. They transcend the physical reality they then control. This is a puzzle that leads to idealism - including the idealising of logic as "just a free mathematical construction, which also seems to have a Platonic necessity about its axiomatic basis".

    But semiotics makes it clear that this idealistic freedom is the result of the "epistemic cut" in which a code – some vocabulary of symbols that can be ordered by syntactic rules – is then able to "speak about reality" from outside that reality.

    The word "possum" could mean anything. As physics – a sound wave, a reverberation, emitted by a vocal tract – it is just a meaningless noise. And a noise that is costless to produce. Or at least the metabolic cost is the same as what any other noise of a few syllables might cost us.

    The physics of the world thus does not constrain the noises we make in any way. And that is why these noises can come to have their own idealistic world of meaning attached to them. We are free to do what the heck we like with these noises. We can create systems of rules – grammars and syntax – that formalise them into structures that bear meaning only for "us".

    So idealism is made to be something that actually exists in the physical world because this world can't prevent costless noise patterns being assigned reality-independent meaning. Noise can be turned into information and there ain't a damn thing the world can do about that transcendent act of rebellion against its relentless entropy.

    But then humans have to still live in the world and do enough to cover the actual small cost of speaking about the world in the free way that they do.

    So the freedom of truth-making is in fact yoked to the profit that can be turned on being a speaking creature. It all has to reconnect to the physics. And of course, as human history shows, being speaking creatures living in shared communities of thought, in fact can repay an enormous negentropic dividend.

    To the extent our model of the world is "true" – pragmatically useful – we gain power over the entropy flows of our environments and can bend them to our collective will.

    The problem in the discussions here is that logic gets treated as something actually transcendent of this rooted, enactive, organismic reality of ours. But even logic – and information in general – is finding itself properly reconnected in physics.

    Turing invents universal computation? Computer science eventually matches that by showing reality has its fundamental computational limits. The holographic principle tells us any computation does have some Planck scale cost – a cost which is small, but not actually zero. And so try to build a computer with enough complexity to tackle really intensive problems and it would shrivel up into a black hole under its own gravity.
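
    That Planck-scale cost can be illustrated with the Bekenstein bound, which caps how many bits any physical system of a given mass and radius can hold. A minimal sketch - the formula is standard, but the 1 kg / 1 m "computer" is my own illustrative choice:

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bekenstein_bound_bits(mass_kg: float, radius_m: float) -> float:
    """Maximum information (bits) a system of given mass and radius
    can contain: I <= 2*pi*R*E / (hbar*c*ln 2), with E = m*c^2."""
    energy = mass_kg * c**2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

# Illustrative: a 1 kg computer packed into a 1 m radius sphere
bits = bekenstein_bound_bits(1.0, 1.0)
print(f"{bits:.3e}")  # on the order of 1e43 bits
```

    Try to store more than that in the same region and the mass-energy required would have collapsed it gravitationally first - which is the precise sense in which computation never fully escapes the physics.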

    Information theory puts computing back firmly in the world it thought it had transcended.

    And the same ought to be happening for logic.

    Which is where semiotics comes in. It defines the line between rate independent information and rate dependent dynamics in a way that is biological rather than merely computational.

    Logic as maths led to computers as logic engines. Blind hardware enslaved by blind software, with the human element – the intentionality and truth-making – once more floating off above the heads of all the physical action in some idealist heaven.

    Semiotic approaches to truth-making discover logic to lie in the way that the connection between models and their realities is reliant on the device of the mechanical switch. This is the fundamental grain of action because it is where the effort of executing an intent becomes symmetric with halting that intent. And so that intent becomes a free choice.

    You can flick the light on or off. You can push the nuclear doomsday button or leave it alone. The greatest asymmetry between a choice and its result can be imposed on nature by making the metabolic cost of choosing option A over option B as entropically symmetric as possible.

    To me, putting this modelling relation front and centre of the philosophy of logic would clear up the old truth-maker chestnuts forever. We could move on to more interesting things.

    Mechanistic logic has confused people's metaphysics for quite a long time now. Roll on organismic logic. Let's reconnect to the systems view of reality that has been chuntering along in the background ever since Anaximander. Let's finally understand what Peirce was on about as he laid its general foundation.
  • Nice little roundup of the state of consciousness studies

    (I think Apokrisis would probably disagree but I'll leave that to him)Wayfarer

    It is much more prosaic than that. Barbieri wanted to be the big cheese with his ribosome theory. Pattee was over-shadowing him and the rest by arriving late, and endorsing Peirce over Saussure.

    So he left in a dramatic huff to re-establish his own code biology brand. As it happens, he backed the right horse in the ribosome. That has indeed moved centre stage of abiogenesis in my view. And the ribosome is a very “Peircean” structure, a very convincing tale of how the epistemic cut could have first arisen in practice.

    Arran Gare did a social history of the Barbieri affair - https://philarchive.org/rec/GARBAC-4
  • Reviews of new book, Neo-Aristotelian Perspectives in the Natural Sciences

    As to holism, I find this:
    "the theory that parts of a whole are in intimate interconnection, such that they cannot exist independently of the whole, or cannot be understood without reference to the whole, which is thus regarded as greater than the sum of its parts."

    If you accept this, then can you explain to me what "cannot exist independently of the whole," and "is thus regarded as greater than the sum of its parts" mean?

    By "in intimate interconnection," I assume that means in terms of the function of the whole, if the whole has a function. The valves are "intimately interconnected" to the crankshaft in terms of the overall functioning of the engine, but they had better not ever touch!

    And from the engine. I can remove parts and put them over there. They exist independently over there, yes?
    tim wood

    You seemed to think there was still something to address here?

    Well let's talk about organisms rather than machines. Can an organ exist without a body, and a body without its organs?

    Can a heart have an independent existence - one that never involved the context of being part of an organism which needed it for the purpose of pumping blood? Or is there in fact an intimate interconnection, a co-dependent relation, that speaks to the wholeness of the biological causality?

    So as I said, it is nuts to talk about proving existence is a machine because you can prove a machine is a machine. A machine is a device built in the very image of reductionist modelling. It works because all the causal holism has been stripped out of the situation.

    That is why machines have to be built. They can't grow. They don't get to decide their own use or design. Quite deliberately, there is a lack of any intimate immanent interconnection between their material/efficient causes of being and formal/final causes of being. Because we, as the human builders of a machine, want to supply that part of the causal equation. It is we who have the purposes and the blueprints.

    And so the realm of machinery is a special reduced kind of world we create by constraining the usual holism of nature. An internal combustion engine is a controlled explosion directed at regular intervals through a system of pistons, cylinders, cranks and gears. We make sure all the parts are machined from sturdy metal, that the petrol/air mix is just right, that the timing of the explosion is precise.

    In short, we do everything possible to reduce it to a mere assemblage of independent parts. And that is why the human mind - with its ideas about purposes and designs - becomes its own culturally independent thing.

    As a species, as an organism, we have been transferring a large part of our being into our technology. It started just with cooking, spears and hammers. Now it's iPhones and space shuttles. And in splitting off the material/efficient causes of being into a realm of machinery, that has increasingly freed us to be purely intellectual beings - organisms that are now largely devoted to supplying the other half of the causal equation, the purposes and the plans.

    So there is a nice little irony there for @Wayfarer's OP. The mathematical turn in Greek thought was all about fabricating the conditions of organismic transcendence.

    We could become the gods of technological creation as maths was the basis of a new epistemic cut in nature. We could split our organismic nature in half, turn to technology as the amplifier of our material/efficient causes of being, and then in matching fashion, become amplified in terms of our scope to have grand purposes and grand designs. We transcended our biology and even sociology to the degree we made it possible to dwell in a technologically-based paradise of ideas.

    So we rewrote the rules of organic holism. Or at least took it to the next semiotic level by discovering the power of mathematical/logical language - a generalised syntax or grammar now completely washed clean of any intrinsic semantics.

    Again note. Language itself was made mechanical - logical, computational, a composition of atomistic parts with no holistic entanglements. So no wonder that the reductionist mindset - the one that tries to view every situation as another machine - has become so ingrained it can no longer even be noticed as a mindset.

    We no longer think in the social language of words - the everyday speech that still reflects the structure of intimate interconnections and interdependencies with our other tribe mates. With a standard modern education, we are trained to be as mechanical as possible in our critical thinking skills. When asked any big questions, it seems the only right way to go. Does this compute ... in the machine-like fashion that is the standard issue model of physical reality now?

    So again the ironies. To the degree we have founded ourself in mechanism, we have liberated ourselves to be gods or free spirits of the world in which we live. We have achieved Cartesian dualism as an act of self-made causal division. And that then has become a standard source of philosophical angst.

    Are we just enlightened machines, or souls existentially trapped inside fleshbots? Which of the two things are we really - a construction of material/efficient cause, or an expression of transcendent formal/final cause? In fact, we are just living a thoroughly divided life that has been amplifying both aspects of our organismic being exponentially. We are being stretched in opposite directions having stumbled into the means to do so - that Greek turn, the development of pure syntax, the development of a mathematical/logical point of view which can Platonically split our world.

    Now that again is why we really, really need neo-Aristotelianism today. We have to accept that all four causes compose any functional system. That has to be our philosophical frame of analysis if we really want to understand "everything".

    Most folk are stuck with the conflicted image of Platonic dualism. The world is an unthinking machine. We are rational souls. So metaphysics basically can't make sense of how things are. Caught in this paradox, people fall to bickering about whether everything is in fact all mechanical, or all spiritual. Every thread on this forum goes down that gurgler. It is just the way modern culture leaves people.

    And that is why it would be wonderful if more people understood holism properly. It is certainly true that to be a modern human is to be divided between the material possibilities of a mechanised existence, and the intellectual possibilities of a free imagined existence. We have made our lives as Cartesian as possible. But that is really weird when you think about it. Holism would be the way to turn that around and see the further possibilities for a psychic integration of that divided self.

    Well, let's not exaggerate. Most people have zero interest in philosophy and do live rather unanalysed lives. They are social organisms, responding to their immediate cultural contexts, and probably all the happier for it. The contradictions are not felt because they just don't believe that other people are merely machines, nor in fact transcendent beings. They are simply other people and the ordinary embodied games of language apply. No need to introduce any mathematical abstractions into this equation and thus set up some further metaphysical drama.

    But once you are exercised by the division that is forming our modern intellectual condition, then you ought to be pleased that there is a way to heal it - neo-Aristotelianism, or any other of the many brand-names for a holistic, four causes, understanding of metaphysics.

    It settles the old differences while opening up new intellectual horizons. Human anthropology is about the most trivial and easily disposed of issue. It is how holism applies to physics and cosmology that would be cutting edge. Or to life and mind in some properly structural sense. Now we are talking about the new adventures that science has embarked upon.
  • "Chance" in Evolutionary Theory

    The point about chance in biology is that it is something life has to mechanically manufacture because it doesn't really exist in nature.

    Now that is a confronting way to put it perhaps. But consider the parallel with a tossed coin or rolled die.

    As humans, we can imagine this Platonic thing of pure or crisp chance. Following the laws of thought, we can imagine reality being divided in a digital or binary fashion into a definite set of possibilities that then either definitely happen or definitely don't happen.

    And then we can produce physical models of such absolute chance. We can really go to town to machine a flat disk so that it has an insignificant degree of asymmetry to bias any fair toss. We can really go to town to produce a perfect cube, with rounded corners, so that it too will have only inconsequential levels of bias when rolled on a flat surface.

    So the physical world is analog. But we can make digital devices. Or at least we can approach our Platonic notion of absolute chance so closely as not to make any practical difference, given our purposes - which can be using chance to gamble, or chance to decide who serves first, or whatever.
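
    That "close enough for practical purposes" point can be simulated. A minimal sketch, where the 0.5005 bias figure is an arbitrary stand-in for whatever residual machining asymmetry a real coin carries:

```python
import random

def flip_run(p_heads: float, n: int, seed: int = 42) -> float:
    """Frequency of heads over n tosses of a coin with bias p_heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n))
    return heads / n

# A 'manufactured' coin with a tiny residual bias vs the Platonic ideal
freq = flip_run(p_heads=0.5005, n=100_000)
print(freq)  # indistinguishable from 0.5 for everyday purposes
```

    At this sample size the sampling noise (roughly ±0.002) swamps the engineered-in bias, so the analog artefact serves perfectly well as a digital chance machine for gambling or deciding who serves first.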

    So the point is that a world with digital perfection of this kind - a perfect symmetry of an outcome-generating process that removes any predictability from some assignable cause - does not exist normally in the world. It has to be made. And to get made implies someone with an interest in that happening. It is already a purposeful act to arrange reality so as to produce chancy outcomes.

    We think of natural systems being intrinsically chancy. So a tornado could take any path, a thunderstorm could pop up anywhere. But this is vague chance, or analog chance. Yes, there is unpredictability. But it is just as mixed with inevitability. In hindsight, the thunderstorm had to happen the way it did because so many confluent events panned out that way. However there is not the sharp binary consequence that is taking one path and not another. Instead there was an infinity of trajectories - and most of them were bunched together in the way described by a chaotic attractor. So you have this muddy form of chance, this analog chance, where generally things pan out in a certain direction, and the finer detail of what happens doesn't make much difference.

    With life however, it was all about sharpening up muddy chance into sharp chance. The genetic mechanism separated aspects of structure so they became discrete traits. You could take bits of the whole and ask whether going in direction A or B was the better binary choice.

    So life always was about the evolution of evolvability. Life arose out of the analog organic soup by being able to pose digitally crisp questions. Intelligibility in a logical sense was the big move.

    And its more than just about DNA. Bacteria have unfocused sexual lives. They can share genes at any time across different species. But multicellular life developed a more binary approach to sex. You eventually get individual acts of breeding where sharp mating choices are being made. It now becomes an either/or fact of history whether A mated with B, rather than C, D or E.

    So a simplistic ontology of life does stress that what is different is that evolution is ruled by chance. It is a story of the blind watchmaker and cosmic contingency. But this is a view of chance that already presumes a digital physics - a world where absolute determinism rules, and so chance is defined in terms of there naturally always being absolute crispness about what did happen vs what didn't happen in a material sense.

    But a more organic conception of reality sees it as analog or muddy when it comes to its variety. Nothing actually starts in sharp distinction. Distinctions or individuations are things that have to be developed. And to varying degrees, material individuation can arise of its own accord due to contextual factors. Yet it all remains entangled or unseparated in some degree too. A bit soupy.

    Life then came along and imposed a Platonic digital rigour on this soupy organic possibility. It framed the chemistry with cell walls, enzyme rate knobs, molecular motors, receptor pores and all kinds of other digital devices. The chemistry was organised by a tight set of yes/no paths and switches.

    So developmentally, chemistry became informationally regulated. And as the flip side of this coin, the regulating information was exposed to blind evolutionary selection. Ways were found to put as much of this digital machinery on show, out in the world for natural selection to play its part, as made sense, given the purpose of wanting adaptive plasticity to go along with the adaptive stability.

    So chance - as we digitally conceive of it in its Platonically-ideal splendour - is something that life has become good at manufacturing as it is so useful. Just as life has become good at manufacturing its opposite - a regulated, homeostatic, stability. The kind of purposeful state in which strong determinism appears to rule rather than strong chance.

    This is a view of the Universe that can't be seen from a classical Newtonian perspective. But it is the thermodynamic view of a Universe that is mostly a vague entropic mess spreading and cooling its way downhill to a heat death. And out of this sludge, life arises by a negentropic dichotomy. It divides the sludge into a more regulated aspect, and a more chancy aspect. It creates a new, more mechanical, level of self-interaction that makes the sludge both more self-organised, and less self-directed, than was the case.

    So the "paradox" is that life seems both more purposeful and more chancy than the world it arises in. For monistic thinkers, this creates a deep problem. Life as a phenomenon ought to be reduced to one of these two ontic categories - necessary or contingent, determined or random, cosmically inevitable or cosmically accidental.

    But a systems approach to existence says instead that reality is triadic. It always has this extra dimension which is the developmental one of the vague~crisp. The laws of thought, with their insistence on classical binary possibilities, are just one end of this spectrum - the crisply developed limit. And so our logic has to be larger. It must include the more radical kind of ground that is the muddy analog swamp out of which crisp counterfactuality has to emerge.

    And it is this triadicity which explains why there are always the dichotomies. For life to be more self-determining, it had to also be more deliberately chancy. It had to go in opposite directions within itself as a material phenomenon to break away from the entropic muddiness that was its initial conditions.

    That is why theoretical biologists like Rosen break life down into the dichotomy of metabolism and replication, why they talk about the centrality of the epistemic cut. It is not about which came first - the development or the evolution, the metabolic processes or the genetic regulation. The first thing to happen is the division itself - the division that sets deterministic development and chancy evolution apart.
  • On materialistic reductionism

    And what do you think language is if not a (particular kind of) aesthetic phenomenon?StreetlightX

    It would be nice if you defined what you mean by aesthetic in this (apparently) ontic context.

    Sure, I agree that neurobiological evolution results in embodied valuation. There is an emotional reaction to all that is the focus of attentional processes. So there is no doubting there is a phenomenology.

    But to call it "aesthetic" is an appeal to something much more Platonic and ideal in normal usage - the holy trinity of truth, beauty and the good. And really, none of those has much to do with neurobiological level responses. Rather they too are discursive formations that have developed culturally.

    So it would be quite wrong to mix up the two levels of valuation - the biological and the cultural. Especially when you mean to use the cultural sense to describe the embodied neurobiology.

    In the words of Emanuele Coccia, "language is a superior form of sensibility." There's much to say about language - if not culture itself - as a fundamentally digital (and hence self-reflexive, hierarchically structured) form of behavior, but again, there's no fundamental break from sensibility that digitality effects; not to mention that language, contrary to popular understanding, is primarily phatic - concerning intersubjective relations between speakers - rather than non-phatic - concerned with the relaying information between speakersStreetlightX

    So clearly I argue that language is not a particular kind of aesthetic phenomenon, but instead a general kind of semiotic mechanism. And so philosophy would need to consider the way language does mark a new level of break.

    Again, there is continuity of semiosis in nature - at least from a Peircean pansemiotic perspective. So biological organisation and systems of meaning (your aesthetics/sensate body) are also explained by semiotic mechanism (messages, switches, paths, codes). But then there is a radical stepping up of things because of language and cultural evolution.

    Now a point about semiotics as a theory of meaning - why it is not a hollow term like "aesthetics" - is that it can be explained in material terms. Or rather, as formally dichotomous to materiality.

    Symbols create a further informational dimension to reality - one hidden within the material world with its dissipative flows. This is what Pattee's epistemic cut, or Rosen's categoric distinction between metabolism and replication, is about.

    Just as a computer's circuits can symbolically represent any idea for the same physical cost, so DNA can represent any protein (and hence the organisation of any metabolic process), and words can represent any thought (and hence the organisation of any social process).
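    A small illustration of that representational arbitrariness: the same physical bit pattern reads as a word, an integer, or a floating-point number depending purely on the interpreting convention. This is only a sketch - the example bytes are arbitrary, chosen for illustration.

    ```python
    import struct

    # The same four physical bytes under three coding conventions. The
    # bit pattern never changes; only the interpreting context does.
    # (The example bytes are arbitrary.)
    raw = b"semi"

    as_text = raw.decode("ascii")             # a word fragment
    as_int = int.from_bytes(raw, "big")       # an unsigned integer
    as_float = struct.unpack(">f", raw)[0]    # a 32-bit float

    print(as_text, as_int, as_float)
    ```

    The physical cost of storing the four bytes is identical in every case; the "meaning" lives entirely in the reading convention, not in the matter.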

    Thus we have a physicalist account with semiosis. Symbols can regulate material flows because they exist in a dimension of information orthogonal to those flows. They stand apart to create a source of action that the physical world simply can't prevent.

    So unlike this vague notion of aesthetics or phenomenological value, semiotics speaks to an actual fundamental physical break that is matter~symbol. And then - in foregrounding the issue of the machinery - it also says why there is a radical difference between animals (with just genes and neurons) and then humans (with genes, neurons and words - and numbers now too).

    So to talk about language as a superior form of sensibility is crap. Sensibility is the product of genes and neurons (even animals are aware and anticipating). But words and numbers play out at a new cultural and abstract level of semiosis.

    Yes, the two are intertwined intimately in neurodevelopment. Language structures sensibility - and needs in return to be grounded in sensibility. But they evolve in separate worlds. The senses evolve biologically, discursive structures evolve culturally. And it is the right kind of thing for words to be doing to regulate that sensibility in pursuit of social goals. That is nature in action. Selfhood - and the aesthetic attitudes that might seem bound up in that - is a social construction.

    So this is about orientation. You wave the banner of embodied cognition as if you are anti- the notion of symbolic abstraction being still part of nature. Whereas I see it as part of the continuity of nature - nature's other hidden dimension. You say language is just more sensate bodilyness - a means to co-ordinate intersubjectivity. And sure, that is the everyday part of it - getting the social group to feel the same way. But then language does also develop an intellectual life of its own that clearly goes beyond immediate human needs and wanders off into metaphysics and mathematics and cosmology.

    We were already the vessel for social ideas playing out way above our heads. And now even abstracta have taken off as almost a lifeform of their own.

    Again, I have no problem with debates over whether this is a good or bad thing, a natural or artificial thing. There are arguments both ways. But the point is that semiosis gives you an ontic framework that is precise here. Whereas your use of "aesthetics" as a theory of meaning seems vague, ill-founded, and unilluminating so far. It seems merely to exist as a way to force through whatever popular PC politics is the predominant meme within your own social peer group. As you have employed it to date, it is a tool of rhetoric, not philosophy.
  • General purpose A.I. is it here?

    I was seeking to make a distinction between simulating a human being and simulating general intelligence....I was using the criterion of if a computer could learn any problem and or solution to a problem that a human could...m-theory

    Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

    So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".

    So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?

    Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.

    But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

    Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

    If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.

    Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.
  • General purpose A.I. is it here?

    But I don't agree that we have to solve the origin of life and the measurement problem to solve the problem of general intelligence.m-theory

    Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are the currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.

    I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.m-theory

    That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

    The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.

    I explained this fairly carefully in a thread back on PF if you are interested....
    http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

    So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.

    What is wrong with bayesian probability I don't get it either?m-theory

    I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

    But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.

    Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

    Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

    And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.
  • General purpose A.I. is it here?

    We might disembody a head and sustain the life of the brain without a body by employing machines.
    Were we to do so we would not say that this person has lost a significant amount of their mind.
    Would we?
    m-theory

    That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

    Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.

    My notion was that we might hope to model something like the default mode network.m-theory

    That is simply how the brain looks when attention is in idle mode with not much to do. Or indeed when attention is being suppressed to avoid it disrupting smoothly grooved habit.

    If you state that the origins of life must be understood in order that we understand the mind that is claim that entails burdens of proof.m-theory

    Who is talking about the origins of life - the problem of abiogenesis? You probably need a time machine to give an empirical answer on that.

    I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.

    The main issue at hand is whether or not computational theory of the mind is valid.
    Not whether or inorganic matter can compute.
    m-theory

    Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?

    And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.

    And we know that computation is rooted in material stability? Hardware fabrication puts a lot of effort into achieving that, starting by worrying about the faintest speck of dust in the silicon foundry.

    And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?
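    That picture of computation - mapping symbol patterns onto symbol patterns by rule lookup - can be caricatured in a few lines. A Searle-style sketch, with invented phrase pairs; nothing in the mechanism understands either side of the table:

    ```python
    # A Searle-style lookup table: pure syntax, no semantics anywhere.
    # The rulebook pairs input patterns with output patterns; the
    # phrase pairs are invented for illustration.
    rulebook = {
        "how are you?": "fine, thanks - and you?",
        "what is the capital of france?": "paris, of course.",
    }

    def chinese_room(utterance):
        # Pattern in, pattern out - nothing here "understands" either side
        return rulebook.get(utterance.lower(), "interesting. tell me more.")

    print(chinese_room("What is the capital of France?"))
    ```

    From the outside, the room can pass as conversant; inside, there is only string matching. The point at issue is whether scaling up the rulebook ever amounts to anything more.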

    So that leaves you having to argue that, despite all this, computation has the right stuff - that it is merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose. And so you are excused the burden of saying just why that would be so, given all the foregoing reasons to doubt.
  • General purpose A.I. is it here?

    No of course I don't agree that the best theory of the mind must be biological.m-theory

    Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist.

    I offered the that the pomdp could be a resolution.
    You did not really bother to suggest any reason why that view was not correct.
    m-theory

    But it is a resolution in being an implementation of the epistemic cut. It represents a stepping back into a physics-free realm so as to speak about physics-constrained processes.

    The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own.
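    For what it's worth, the modelling-relation reading of a POMDP can be made concrete: the belief state is the agent's internal, rate-independent model, updated by Bayes' rule from noisy observations of a world it never accesses directly. A minimal sketch - the two-state world and the sensor probabilities are invented for illustration:

    ```python
    # A minimal POMDP-style belief update (a Bayes filter sketch).
    # The two-state world and the sensor probabilities are invented
    # purely for illustration.
    states = ["safe", "risky"]
    belief = {"safe": 0.5, "risky": 0.5}   # the agent's internal model

    # Hypothetical sensor model: P(observation | state)
    sensor = {
        "safe":  {"ping": 0.2, "quiet": 0.8},
        "risky": {"ping": 0.7, "quiet": 0.3},
    }

    def update(belief, observation):
        # Bayes' rule: posterior is proportional to likelihood times prior
        unnorm = {s: sensor[s][observation] * belief[s] for s in states}
        z = sum(unnorm.values())
        return {s: p / z for s, p in unnorm.items()}

    # A "ping" observation shifts the model towards the risky state
    belief = update(belief, "ping")
    print(belief)
    ```

    The cut is visible in the code itself: the belief dictionary stands apart from whatever dynamics actually generated the observation - but a human wrote both the states and the sensor model, which is exactly the missing bit at issue.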

    Mind is only found in living organic matter therefor only living organic matter can have a mind.
    That is an unasailable argument in that it defines the term mind to the exclusion of inorganic matter.
    But that this definition is by necessity the only valid theory of the mind is not simply a resolved matter in philosophy.
    m-theory

    Fortunately we only have to consider two theories of mind in this discussion - the biological and the computational. If you want to widen the field to quantum vibrations, ectoplasm, psychic particles or whatever, then maybe you don't see computation as being relevant in the end?

    It is not immediately clear to me how this general statement can be said to demonstrate necessarily that computation can not result in a mind.m-theory

    So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem.

    My argument is on the first page below Tom's post.m-theory

    That's a good illustration of the absolute generality of the measurement problem then. To have a formal theory of the mind involves also the informal choice about what kind of measurement stands for a sign of a mind.

    We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide, oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.

    But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.

    Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.

    You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different.
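    The difference matters because halting within a fixed step budget is decidable - you just run the program and count - while halting in general is not. A toy sketch of the decidable, bounded case; the generator-as-program encoding is merely an illustrative convention:

    ```python
    # Halting within a fixed step budget is decidable: run the program
    # and count steps. Halting in general is not - no budget ever
    # distinguishes "never halts" from "hasn't halted yet".
    def halts_within(program, max_steps):
        """Run a generator-based 'program' for at most max_steps steps."""
        g = program()
        for _ in range(max_steps):
            try:
                next(g)
            except StopIteration:
                return True          # the program terminated
        return False                 # undetermined: may halt later, or never

    def terminating():               # halts after ten steps
        for _ in range(10):
            yield

    def looping():                   # never halts
        while True:
            yield

    print(halts_within(terminating, 100))   # True
    print(halts_within(looping, 100))       # False - but only "not yet"
    ```

    The `False` branch is the crux: within a formal computational frame it can only mean "not yet", whereas an open-ended inquiry is free to stop at pragmatic indifference.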
  • General purpose A.I. is it here?

    I don't really have time to explain repeatedly that fundamentally I don't agree that relevant terms such as these examples are excluded from computational implantation.m-theory

    Repeatedly? Once properly would suffice.

    This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.m-theory

    Read Rosen's book then.

    Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?m-theory

    You just changed your wording. Being dichotomously divided is importantly different from existing independently.

    So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.

    And that is indeed the basis of my pan-semiotic - not pan-psychic - metaphysics. It is why I see the essential issue here the other way round to you. The fundamental division has to develop from some seed symmetry breaking. I gave you links to the biophysics that talks about that fundamental symmetry breaking when it comes to pansemiosis - the fact that there is a convergence zone at the thermal nano-scale where suddenly energetic processes can be switched from one type to another at "no cost". Physics becomes regulable by information. The necessary epistemic cut just emerges all by itself right there for material reasons that are completely unmysterious and fully formally described.

    The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans.m-theory

    What a triumph. A computer got good at winning a game completely defined by abstract rules. And we pretend that it discovered what counts as "winning" without humans to make sure that it "knew" it had won. Hey, if only the machine had been programmed to run about the room flashing lights and shouting "In your face, puny beings", then we would be in no doubt it really understood/experienced/felt/observed/whatever what it had just done.

    Again I can make no sense of your "physics free" insistence here.m-theory

    So you read that Pattee reference before dismissing it?

    And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one.m-theory

    I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.

    I did not anticipate that you would insist that I define all the terms I use in technical detail.
    I would perhaps be willing to do this I if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further or exchange.
    m-theory

    I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

    POMDP is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental-level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?
  • Objective Truth?

    See also the notion of "pansemiosis" that has become in-vogue among some of Peirce's successors in contemporary semiotic theory. The story being told in both cases goes something like this: there's no problem of how thought maps to the world because the structure of the world matches the structure of thought.Aaron R

    I think pansemiosis has to be more subtle than that. It says instead that the structure of thought and the structure of the world both share the deeper structure that is the structure of semiosis, or the sign relation.

    So in practice, existence is still divided into thinking creatures and thoughtless world (by the epistemic cut of a modelling relation). Otherwise pansemiosis starts to become indistinguishable from panpsychism.
  • Metaphysics as Selection Procedure

    If the territory the map covers is everything, then the map has to include itself - the map become a part of the territory. That's what makes me a little wary of all theories of everything, this kind of recursive implosion.csalisbury

    You will first note of course that Pattee is saying the map is an atemporal truth. It is the rate independent information or model used to constrain the rate dependent dynamics, ie: the world of material possibility.

    And then why does the map have to include itself? Semiotics is expressly about a modelling relation. It is irreducibly triadic in that regard. That is its major distinction from other more simplistic and familiar metaphysical frameworks.

    So what semiotics talks about is the functional wholeness of a relation between map and territory.

    You also have to respect the shift from epistemology to ontology. So if we are talking about ontic strength semiosis - as biosemiosis and pansemiosis do - then the map is actually in a relation that is adaptively making the world. It is not just a description (to be interpreted by a transcendent mind) but the act of interpretance itself by which a world is achieving crisp and stable existence.

    You could think of the map more as a blueprint - an encoding of formal and final cause along the lines of a genome. It describes the landscape as it is meant to be.

    So selfhood becomes the entire production - just as it is in standard biology. Selfhood is immanent in the modelling relation. And selfhood is only even possible due to there being the kind of semiotic epistemic cut that Pattee, following von Neumann, describes.

    A scientific or metaphysical theory of everything would then - in the semiotic view - have that same character. It would be a "map" of the modelling relation, or sign relation, itself. It would be a representation of the fundamental algorithm of self-organisation if you like. So it would be speaking about physical existence in terms of emergent selfhood or universal individuation.

    You fear the recursive implosion after I have advertised the advantages of what is in fact a recursive explosion - the open ended generativeness of a fundamental relation. But perhaps you can see that is not an issue now. Simplicity can beget complexity, but simplicity can't get simpler if it is already as simple as it is possible to get.

    To use another analogy, a circle can be distorted in all sorts of ways to make more complicated shapes. But you can't get simpler than a perfect circle. So a circle doesn't suffer a recursive implosion. It instead emerges as the crisp asymptotic limit on any implosion.
  • Metaphysics as Selection Procedure

    Yeah, but as soon as your private experience is framed by yourself as an argument, it is social, even if never in fact articulated publicly. So to be mapped is already crossing the line that is the epistemic cut upon which human introspective "self consciousness" is constructed. It invokes the "self" as the interpreter of a sign, the sign being now the observable, the claimed phenomenon.

    You seem to imagine that naive experiencing of experiences is possible. But to talk about the self that stands apart from his/her experiences is already to invoke a pragmatist's sign relation.
  • Randomness

    My objection is that the QM description of truly random events is incoherent.Hanover

    To be fair to QM, it is deterministic at the wavefunction level of description. Indeed, extremely so (as it extends this determinism all the way back to the beginning of the Universe, and all the way to its end, according to some interpretations).

    So QM describes the world as rigidly bounded by a set of statistics-producing constraints. It just isn't the "regular" statistics of a classically-conceived system.

    As I mentioned, the "sign" of pure quantum randomness or spontaneity in particle decay is that it exactly conforms to a Poisson distribution. The chance of a particle decaying is unchangingly constant in time.

    And hence also the radical indeterminism, the depth of surprise, when a decay occurs "for no reason".

    A constant propensity for a decay is a state of symmetry, or maximum indifference. One moment is as good as another for the decay to happen. There is no mounting tension as there is in a classical system - pressure building until the bubble must surely burst sooner than later. So a decay isn't caused even by a general thing, let alone a particular thing. It really does "just happen" ... in a way we end up describing in desperation as due to an internally frozen propensity.

    So we know particle decay has this radical nature because a collection of identical particles will tend towards an exact Poisson distribution - memoryless exponential waiting times generated by a hazard rate that never changes. There just is no privileged moment for the individual particle. However long it has already waited, its expected remaining wait is unchanged. It could happen in a split second, or at the end of time. As exceptionality or novelty, it is literally unbounded.
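    That constant hazard rate, and the memorylessness it implies, is easy to check numerically. A minimal sketch - the decay rate of 1.0 is an arbitrary choice for illustration - samples exponential waiting times and confirms that particles which have already survived some interval still face the same expected remaining wait:

    ```python
    import random

    # Exponential waiting times from a constant hazard rate. The decay
    # rate LAM = 1.0 is an arbitrary choice for illustration.
    random.seed(1)
    LAM = 1.0
    waits = [random.expovariate(LAM) for _ in range(200_000)]

    mean_wait = sum(waits) / len(waits)

    # Memorylessness: particles that have already survived past t = 2
    # still face the same expected remaining wait.
    t = 2.0
    remaining = [w - t for w in waits if w > t]
    mean_remaining = sum(remaining) / len(remaining)

    print(round(mean_wait, 2), round(mean_remaining, 2))   # both close to 1/LAM
    ```

    A classical "pressure building" process would show the survivors' remaining wait shrinking as time passed; here it doesn't budge, which is the statistical signature of decay happening "for no reason".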

    On the other hand, we were just talking about the ideal case. And the real world is much messier. So observation or measurement, for instance, can disturb the statistics. Decay can be prevented - further constrained - by the quantum zeno effect. The watched kettle cannot boil.

    So the pure case that produces the Poisson distribution may be an ideal description that nature - its symmetry already broken - never achieves. Yet then also we have to say that nature comes unmeasurably close as far as we human observers are concerned.

    Certainly, when we employ atomic decay as our most accurate clock to measure the world, we are relying on the ideal being achieved so as to in fact be able to tell the time. :)

    Anyway, what QM really does is take the contrasting notions of determinism and chance to their physically measurable extremes. And it then quantifies the degree of entanglement or non-separability that irreducibly remains - the Planckian uncertainty.

    Classical dynamics can't make sense of this because it just doesn't recognise the notion of "degrees of disentanglement". It takes the all or nothing approach that things are either completely free or completely controlled, completely one or completely divided.

    This is useful as it has great simplicity. And a particular statistics results - that based on the assumption of completely independent variables.

    But quantum physics recognises that issues of separation and connection are always irreducibly relative - each is the yardstick of the other, as described in the reciprocal logic of a dichotomy. And so quantum statistics has to allow for variables that can be entangled.

    Mathematically it is not incoherent. Well, at least not until you want to recover the classical view and disentangle your variables by "collapsing the wavefunction". At which point, the famous issue of the observer arises. It becomes "a choice" about how the epistemic cut to separate the variables cleanly is to be introduced. The maths is incomplete so far and can't do it for you.

    So quantum mechanics takes a step deeper into the essential mystery of nature. It differs from the classical view in putting us firmly inside our metaphysical dichotomies. Randomness and determinism are not absolute but relative states. The new question that comes into focus is relative to what?

    Relative to a human mind is a bad answer (for a realist). Relative to each other - as in a dichotomistic relation - is logically fine but also incomplete as it does not yet explain the real world which is full of different degrees of randomness and determination. (All actual systems are a mix of constraints and freedoms.)

    So that is why eventually you need a triadic, hierarchical and semiotic metaphysical scheme. You need to add in the effects of spatiotemporal scale. A local~global separation produces a "fixed" asymmetry in the universal state of affairs. Action is now anchored according to a past which has happened and so determines the constraints, while the future is now the space of the remaining possible - the degrees of freedom still available to be spent or dissipated on chance and novelty.

    And this is the way physical theory is indeed going with its thermal models of time -
    http://discovermagazine.com/2015/june/18-tomorrow-never-was
  • Change and permanence, science, pragmatism, etc.

    You're saying that proper pragmatism is an ontic inquiry; you can always ask "why," but once you do this past the point of universal invariance, you hit a wall because there's no answer in terms of a more general kind of invariance.Pneumenon

    I mention invariance as Nozick did a good book on that (if you want a more contemporary reference to answer Rorty).

    But yes, invariance is the natural limit of skepticism. It defines the point where asking "why" no longer makes a difference. And so you might as well be quiet.

    And indeed, Witty was channeling Peirce via the proddings of Ramsey if you check out Cheryl Misak's latest retelling of the history. So quietism does not simply have to be an epistemic cut-off, it can become the ontic terminus. Invariance is the equilibrium state where further detail cannot disrupt the global whole.

    This gets tricky because it is about reaching a metaphysics where both epistemology and ontology are saying the same thing for the same reasons. The grand project is to re-unite what has become philosophically divided.

    So Rorty is saying pragmatism means goals are entirely personal. And models of reality are completely socially constructed as a result. The distance between the phenomenal and the noumenal is .... an unbridgeable chasm in the end.

    But Peircean pragmatism says, hey look, the universe itself has a "reasoning mind". Our best model of epistemology is thus our best model of ontology. It is the same modelling mechanism (or semiotic sign relation) at work in both cases. It is just that our human or Kantian-level relating is indeed highly specific and personal, while that of the universe is at the other end of the spectrum in being maximally general and "disinterested" in any particulars. That is why the universe can be described in terms of the most generic physical laws, or statements of mathematical symmetry and symmetry breaking.

    So sure, this "pansemiosis" of Peirce (he called it objective idealism) sounds pretty mystic ... if you are still a reductionist. But it is a grand unifying project that makes plenty of sense. It accounts for what science has actually found (in itself needing to re-unite observers and observables to achieve any final theory).

    Basically, a qualified Principle of Sufficient Reason with a restriction on the kinds of explanations allowed, viz. they must be in terms of more general invariance.Pneumenon

    Well it is more complicated as you have a point of departure - vagueness - as well as one of arrival, in generality. So the genesis of questioning begins with the breaking of one (vague) level of symmetry and ends once continued questioning (or perturbation, or fluctuation) fails to make a general difference.

    And Peirce defined that in terms of the Laws of Thought. Vagueness is that to which the principle of non-contradiction does not apply. Generality is that to which the principle of the excluded middle does not apply. So at the heart of logic, these are well defined terms.
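    Schematically, these definitions can be set against the two classical laws for a predicate P (my rendering, not Peirce's own notation):

    ```latex
    \begin{align*}
    \text{Determinate: } & \neg\big(P(x) \land \neg P(x)\big) \quad \text{and} \quad P(x) \lor \neg P(x) \\
    \text{Vague: } & P(x) \land \neg P(x) \ \text{is admissible (non-contradiction suspended)} \\
    \text{General: } & \neg\big(P(x) \lor \neg P(x)\big) \ \text{is admissible (excluded middle suspended)}
    \end{align*}
    ```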

    Now I want to talk about something else here: why that particular restriction? I would assume that this is motivated by the success of natural science, but that's a guess because you have not yet said so. Does this methodology bootstrap itself out of scientific pragmatism, from "Let's do this because it works" to a more general method, a sort of conceptual ascent? Or is it some other reason?Pneumenon

    The success of natural science does prove that there is an epistemology (of modelling relations) that can lift humans out of their self-interested rut long enough to discover the disinterested invariance of existence "itself".

    And historically, the "Let's do this because it works" version of pragmatism came after - if we are talking about the highly utilitarian kind of pragmatism that James made a big hit of by tapping right into that Enlightenment point of view, which then became the familiar Yankee disconnect between the social and economic spheres of life.

    So it is crucial to point out that pragmaticism includes the very idea of "doing this for a purpose". That is what makes it possible to see that the everyday desires of biologically-evolved and culturally-situated humans are far from an invariant fact of nature. Instead they are highly particular. But then also, by the same token, pragmatism can then model the notion of purpose in general. And thus it starts to make sense that even the universe is formed by its (thermodynamic) desires.

    So yes, the whole argument is immanently bootstrapping in any direction you might care to slice it. That is why it is "naturalism". There can be no transcendent get out clauses. It all has to self organise.

    Clearly for Peirce, it did arise out of scientific practice. He was - rare for a philosopher - a top scientist. But his metaphysics arose as a holistic and organicist retort to the overly reductionist and mechanical understanding of reality that Enlightenment science - the classical world of Newton - had produced in popular thought.

    So Pragmatism proper is about the unity of things. It steers the middle course by being inclusive.

    You can see the way philosophy went after the Enlightenment split things apart. You have the analytics who ran with the reductionism. They went for stories of bottom-up material and efficient cause, rejecting top-down formal and final cause as "spooky".

    Then you have the Romantic counter-reaction that this particular reduction engenders - such as Postmodernism. Now - reacting directly to the popular success of techno-analytic reductionism - you have the alternative camp that says form (or structure) and finality (or meaning) are the true foundation of things. Analytics are just "weird" because they have no soul, don't get poetry, and are generally just uncool and nerdy. Purpose must again be at the metaphysical centre of existence (even if existentialism says that just means purpose as it is to be understood multifariously by "any individual").

    But Peircean pragmatism unites by telling the Aristotelian systems story where existence is the result of a free interaction between bottom-up and top-down causality. The Universe is holistic in that it really is formed by all four of Aristotle's causes. They are all real and to be taken seriously.

    So I get the feeling you want to read a historical direction to this - from science to metaphysics.

    But Peirce was rejecting science as it had become (even for analytics and continentals) in order to return it to the more complete thing it once was (and is now becoming again).

    So pragmatism is a foretaste of that future science, and a return to the roots of metaphysical understanding we see across many ancient cultures in fact - not just the Greeks with Anaximander or Hesiod, but Buddhism, Taoism, even Judaism (as in ein sof).
