• There Are No Identities In Nature
    Accordingly, anything you might say about this analog existence, this continuum, is based only in this assumption. So in order to say anything true about the continuum, your assumption of a real existing continuum must be first validated, justified. Only by validating this assumption does the nature of the continuum become intelligible. To simply assume a continuum, and say that it is of an analog nature, and completely other than the digital, is just an assumption which is completely unjustified, until it is demonstrated why this is assumed to be the case.Metaphysician Undercover

    The semiotic relation is triadic. And this insertion of an extra step - an epistemic cut - is what gets you past this kind of problem.

    So the analog thing-in-itself is vague. It only comes to be called a continuum in crisp distinction to the digital or the discrete within the realm of symbolisation or signification. It is a logical step to insist the world must be divided into A and not-A in this fashion. And then in forming this strong, metaphysical-strength, dichotomy of possibility, it can be used as a theory by which pragmatically to measure reality. We can form the counterfactually-framed belief that reality must be either discrete or continuous, digital or analog, and then test reality against this self-describing theory.

    So the situation is the reverse of the one you paint. We don't need to begin in certainty. Instead - as Peirce and Popper argued with abductive reasoning, as Goedel, Von Neumann and others demonstrated with symbolic reflexivity in general - it can all start with a reasonable guess. We can always divide uncertainty towards two dialectically self-grounding global possibilities. The thing-in-itself must be either (in the limit) discrete or continuous. And then having constructed such a sharply dichotomised state of metaphysical certainty - a logical either/or - we have the solid ground we need to begin to measure reality against that idea of its true nature. Pragmatically, we can go on to discover how true our reasoned guess seems.

    And in Kantian fashion, we never of course grasp the thing-in-itself. That remains formally vague. But the epistemic cut now renders the thing-in-itself as a digitised system of signs. We know it via the measurements that come to stand for it within a framework of theory. And in some sense this system of signs works and so endures. It is a memory of our past that is certain enough to predict our futures.

    So the assumptions here begin in a discussion of existential possibility. If anything exists - in the spatiotemporally-extended sense that we think of as "the world" - then metaphysical logic says there are two options, two extremum principles, when it comes to how that world has definite being. Either it must be continuous or discrete, connected or divided, integrated or differentiated, relational or atomistic, morphism or structure, flux or stasis, etc, etc - all the different ways of getting at essentially the same distinction when it comes to extended being.

    And having identified two complementary limits on being - terms that are logically self-grounding because they are seen to be both mutually-exclusive and jointly-exhaustive - we can be as certain as we can be of anything that reality, the vague thing-in-itself, must fall somewhere between the two metaphysical limits thus defined. Exactly where on this now crisply-defined spectrum is what becomes the subject of measurement.

    Note that this dichotomy itself encodes both the digital and the continuous in being like a line segment - a continuous line marked by two opposing end-points.

    So anyway, the very idea of the analog~digital is based on the more primal dichotomy of the continuous~discrete - a way of talking about reality in general. But with the analog~digital, we are now drawing attention to the general semiotic matter~symbol dichotomy - the step up in material complexity represented by life and mind.

    The analog~digital dichotomy has sprung up in computation and information theory as an ontological basis for a technology - an ontology for constructing machines rather than growing organisms. And yet, in retrospective fashion, it has now become a sharper way of getting at the essence of what life and mind are about - the semiotic modelling relation that organisms have with worlds. The analogy of the code is very useful - not least because it brings so much maths with it.

    But in a sense, the analog~digital dichotomy also overshoots its mark. It leads to the idea that modeler and modeled actually are broken apart in dualistic fashion - like hardware and software. And this leads to the breakdown in understanding here - the questions about how a continuous world can be digitally marked unless it is somehow already tacitly marked in that fashion.

    So once we start to talk about the Kantian "modeler in the world", the first step is to make this essential break - this epistemic cut - of seeing it as the rise of the digital within the analog. Material events gain the power of being symbolic acts. But then we must go on to arrive at a fully triadic model of the modeling relation. And so attention returns to the middle term - the informal acts of measurement that a model must make to connect with its world.

    This is the focus of modern biosemioticians like Pattee, Rosen, Salthe and many others like Bateson, Wilden, Spencer-Brown, and so on. What is it that properly constitutes a measurement? What is it that defines a difference that makes a difference?
  • Dennett says philosophy today is self-indulgent and irrelevant
    Questions about individuation and flourishing have an obvious logical basis in common, wouldn't you say? And flourishing is a pretty practical issue too. To know what it is would be to know how to do it.
  • The intelligibility of the world
    You think consciousness is amazing, but I think Life is also amazing, and we know that Life is a physical process. It is a physical process we are beginning to understand rather well, but if you look at the physical theory that explains it, there is no mention of "say, a force particle/wave or a matter particle". It is a theory of replicators subject to variation and selection. But look - a "physical" theory of abstract objects!tom

    Except biologists themselves would say it is physics regulated by something further - symbols or information.

    The two are of course related in some fashion. But you seem to be talking right past that issue - questions like how a molecule can be a message.
  • Dennett says philosophy today is self-indulgent and irrelevant
    Many discussions about modality are confused because they don't differentiate between modal systems, don't understand the difference between epistemic and deontic modality, and so on. Modal logic itself cannot tell us about the nature of possibility, but again, a logic is a mathematical object, not a metaphysical thesis.The Great Whatever

    Sorry but modal logic bypasses the essential issue of individuation. It treats possibility as countable variety and not indeterminate potential, from the get-go.

    This is largely due to the very nature of maths of course - being the science of the already countable. Give a man a hammer, etc.
  • Dennett says philosophy today is self-indulgent and irrelevant
    Thus someone like Gilbert Simondon, for example, will write, from the perspective of individuation, "at the level of being prior to any individuation, the law of the excluded middle and the principle of identity do not apply; these principles are only applicable to the being that has already been individuated; they define an impoverished being, separated into environment and individual. …StreetlightX

    This is important. Imagine how actually useful modern metaphysics would be if it were generally focused on the central question of individuation rather than being - dynamical development rather than static existence.
  • Thesis: Explanations Must Be "Shallow"
    Or perhaps a metaphysician/scientist can or has deduced the law of gravity from a more general law (gravity is just an example, not at all my interest here). Then this "law" is itself either deduced from yet a more general "law" or itself has "just because" status. Infinite regress or bust, in other words. Hence the "shallowness of explanation."who

    But isn't what really happened that Newton made a successful simplifying generalisation? So for a start, technically, it was an induction rather than a deduction.

    Newtonian gravity made the generalisation that instead of just some things falling towards other things, everything had exactly the same propensity to fall together. And then to go beyond that Newtonian generalisation would require an even more complete generalisation - like general relativity, and after that, quantum gravity.
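
    Just to make that "same propensity" point concrete - this is only the standard textbook statement of Newton's law, nothing beyond what it already says:

        F = G m_1 m_2 / r^2,   so   a_1 = F / m_1 = G m_2 / r^2

    The acceleration of any falling body depends only on the mass it falls towards and the distance between them - its own mass cancels out. That is the precise sense in which everything was given exactly the same propensity to fall together, with the amount of local mass the only particular left to measure.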

    But while this seems like a regress - with no end in sight - you have to take into account that generalisation can only continue so long as there are local particulars to be mopped up in this fashion.

    Newtonian gravity mopped up all the different ways objects fall by saying all mass had the same basic attractive force, so the only local difference to mention is the amount of mass in some spot. Then GR mopped up that kind of particularity in saying mass and energy were both the same general stuff, and a simpler, more general, way to model attraction was positive spacetime curvature, which handled local differences in momentum. QG would take the mopping up to a logical conclusion in putting all the different physical forces on the one quantum field theory footing.

    So what I am saying is that the inductive explanatory regress is self-limiting. It will halt at the point where it runs out of local particulars to generalise away. That is what founds a notion of a theory of everything. It is an asymptotic approach to a limit on explanation.
  • Reality and the nature of being
    The Big Bang was apparently a singularity - a planck-length point of existence that contained anything and everything that could have ever become.Albert Keirkenhaur

    That is a common misconception - that the Big Bang starts from some particular point of spacetime and then expands to fill the whole of that spacetime.

    Instead, the Big Bang is itself the development of spacetime and so where it all "starts from" is not a location but instead a scale - the Planck scale.

    The question then is what kind of thing is the Planck scale?

    And it is at this point you have to think beyond familiar classical concepts like spacetime and energy density. Quantum theory says at the Planck scale, these two things are at unity in some way that adds up to the most radical kind of uncertainty about anything existing.
  • Reality and the nature of being
    As we know energy can not be created or destroyed, but simply re-used. So one wonders how it could possibly be that energy itself even exists at all. it's really quite the puzzle..Albert Keirkenhaur

    Does physics say the Big Bang started in a high state of energy or a maximum Planck-scale state of quantum uncertainty?

    Once the uncertainty started to sort itself into the complementary things of a fundamental action happening in a spacetime - a classical kind of realm with a thermally-cooling "contents" in a thermally-spreading "container" - then we could of course talk about one aspect of this system as being the energy, the matter, the negentropy, etc. But that is a retrospective view from the point of view of classical ontology. And can such concepts be secure in talking about the "time" when everything was maximally quantum?
  • General purpose A.I. is it here?
    In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).m-theory

    I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.

    On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths are a pure maths myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.

    As Chaitin says, the problem for pure maths was that it had the view that all maths could be derived from some finite set of axioms. Instead, creativity means that axiom production is infinitely open-ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.

    As a fundamental problem for life and mind, this is not even on the usual computer science radar.

    Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

    But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

    Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen or it's not, in which case it dies and is erased from history.

    So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate-independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is now instead a death sentence. So that part of the biologically realistic deal is the rate-dependent dynamics.
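
    As a rough sketch of that selection loop - a toy in Python with invented names and a stand-in fitness function, since Chaitin's actual construction mutates real programs and scores them against the Busy Beaver function rather than anything this simple - the shape of it is:

        import random

        def run(program):
            # Stand-in for "the number this program computes" - purely illustrative.
            return sum(digit * (i + 1) for i, digit in enumerate(program))

        def point_mutate(program):
            # The rate-independent side: a random local change to the stored "genome".
            mutant = list(program)
            mutant[random.randrange(len(mutant))] = random.randrange(10)
            return mutant

        program = [random.randrange(10) for _ in range(8)]
        best = run(program)

        for step in range(1000):
            candidate = point_mutate(program)
            output = run(candidate)
            if output > best:
                # The environment's verdict: the biggest number yet seen survives...
                program, best = candidate, output
            # ...otherwise the mutant dies and is erased from history.

        print(program, best)

    The point of the sketch is only to show where the cut sits: the mutation operator never looks at the verdict, and the verdict never looks inside the program.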
  • The intelligibility of the world
    Language is also obviously constrained by actuality, by the nature of what is experienced. It also comes to constrain that experience; it is a reciprocal or symbiotic relation between perception and conception. For me that natural primordial symbiosis consists in the reception of, response to and creation of signs, and I suspect apokrisis would agree.John

    Yep. Symbiosis is a good way to think about it. It all has the causal interdependency that an ecological perspective presumes.
  • The intelligibility of the world
    So, you are going to bypass this problem by ignoring it and go on to more answerable problems? Then you are not answering the question at hand. The naked primal experience is at hand.schopenhauer1

    You forget that I was addressing the OP, not the Hard Problem.

    But we've talked about the Hard Problem often enough. I agree that there is a limit on modelling when modelling runs out of counterfactuality. And this reinforces what I have been saying about intelligibility. To be intelligible, there must be the alternative that gets excluded in presenting the explanation. And once we get down to "raw feels" like redness or the scent of a rose, we don't have counterfactuals - like how red could be other than what it is to us.

    But up until the limit, no problem. Or all Easy Problem.

    And then, challenging your more general "why should it feel like anything?", here is my response. If the brain is in a running semiotic interaction with the world, in a way that makes it a model of being in that world, then why should it not feel like something? Why would we expect the brain to be doing everything that it is doing and yet there not be something that it is like to be doing all that?

    Of course it requires a considerable understanding of cognitive neuroscience to have a feeling of just how much is in fact going on when brains model worlds in embodied fashion - by orders of magnitude the most complex knot of activity in the known Universe. But still, the Hard Problem for philosophical zombie believers is why wouldn't it be like something to be a brain in that precise semiotic relation to the world? Answer me that.

    Panpsychism is a different kettle of fish. It just buries its lack of explanatory mechanism as far out of sight as possible. It says don't worry folks. Consciousness is this little glow of awareness that inhabits all matter. And that is your "explanation". Tah, dah!
  • General purpose A.I. is it here?
    You will have to forgive me if I find that line to be a rather large leap and not so straight forward as you take for granted..m-theory

    Only because you stubbornly misrepresent my position.

    So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about?m-theory

    Exactly. Why say pomdp sorts all your problems when it is now clear that you have no technical understanding of pomdp?

    Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.m-theory

    Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.
  • General purpose A.I. is it here?
    This does not make it any clearer what you mean when you are using this term.
    Again real world computation is not physics free, even if computation theory has thought experiments that ignore physical constraints.
    m-theory

    Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)

    We don't have a technical account of your issue.
    It was a mistake of me to try and find a technical solution prior I admit.
    m-theory

    :-}
  • General purpose A.I. is it here?
    I don't really have time to explain repeatedly that fundamentally I don't agree that relevant terms such as these examples are excluded from computational implantation.m-theory

    Repeatedly? Once properly would suffice.

    This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.m-theory

    Read Rosen's book then.

    Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?m-theory

    You just changed your wording. Being dichotomously divided is importantly different from existing independently.

    So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.

    And that is indeed the basis of my pan-semiotic - not pan-psychic - metaphysics. It is why I see the essential issue here the other way round to you. The fundamental division has to develop from some seed symmetry breaking. I gave you links to the biophysics that talks about that fundamental symmetry breaking when it comes to pansemiosis - the fact that there is a convergence zone at the thermal nano-scale where suddenly energetic processes can be switched from one type to another type at "no cost". Physics becomes regulable by information. The necessary epistemic cut just emerges all by itself right there for material reasons that are completely unmysterious and fully formally described.

    The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans.m-theory

    What a triumph. A computer got good at winning a game completely defined by abstract rules. And we pretend that it discovered what counts as "winning" without humans to make sure that it "knew" it had won. Hey, if only the machine had been programmed to run about the room flashing lights and shouting "In your face, puny beings", then we would be in no doubt it really understood/experienced/felt/observed/whatever what it had just done.

    Again I can make no sense of your "physics free" insistence here.m-theory

    So you read that Pattee reference before dismissing it?

    And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one.m-theory

    I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.

    I did not anticipate that you would insist that I define all the terms I use in technical detail.
    I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further our exchange.
    m-theory

    I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

    Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?
  • The intelligibility of the world
    You're saying that logic constrains thinking, and that is false, because you are making logic, which is a passive tool of thought, into something which actively constrains thought.Metaphysician Undercover

    A tool is an effective cause. A logical constraint is a formal cause. So you are confusing your Aristotelean categories here.

    But logic is not a "passive tool of thought"; on the contrary we cannot think cogently without it.John

    I agree. It is the structural grounding that makes it even possible to act in a "thoughtful" way.

    Of course you can go back before the development of formal language, and even grammatical speech, and argue that animals think without this "tool".

    Yet in fact if you check the very structure of the brain, it is "logical" in a general dichotomistic or symmetry-breaking sense. It has an architecture that is making logical breaks at every point of its design.

    It starts right with the receptive fields of sensory cells. They are generally divided so that their firing is enhanced when hit centrally, and their firing is suppressed by the same stimulus hitting them peripherally. And then to balance that, a matching set of cells does the exact reverse. This way, a logically binary response is imposed on the world and information processing can begin.
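
    A toy version of that centre-surround logic, with invented weights purely for illustration:

        def on_centre_response(patch):
            # patch: 3x3 grid of light intensities; the centre excites, the surround inhibits.
            centre = patch[1][1]
            surround = sum(sum(row) for row in patch) - centre
            return centre - surround / 8.0   # positive = enhanced firing, negative = suppressed

        def off_centre_response(patch):
            # The matching set of cells does the exact reverse.
            return -on_centre_response(patch)

        spot = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # light falling only on the centre
        flood = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # the same light everywhere

        print(on_centre_response(spot))    # 1.0 - fires hard
        print(on_centre_response(flood))   # 0.0 - centre and surround cancel out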

    Then even when the brain becomes a big lump of grey matter, it still is organised with a dichotomous logic - all the separations between motor and sensory areas, object identity and spatial relation pathways, left vs right hemisphere "focus vs fringe" processing styles, etc.
  • Ignoring suffering for self-indulgence
    If you care about suffering, you will do something about it.darthbarracuda

    But while I arguably can't help but care about my suffering, why should I "have to" care about yours? So phrased this way, you already presume empathy as a brute fact of your moral economy?

    For me (and I think for most everyone else who isn't lacking in compassion and empathy - i.e. sociopaths, psychopaths, selfish individuals, most politicians, etc.), it seems wrong to ignore someone who just broke their leg down the block and is screaming in pain...darthbarracuda

    So yes. There is something bio-typical and evolutionarily advantageous about empathy. We can even point to the neurochemistry and brain architecture that makes it a biologically-unavoidable aspect of neurotypical human existence.

    But what then of those who are wired differently and lack such empathy? Is it moral that they should ignore such a situation, or exploit the situation in some non-empathetic fashion? If not, then on what grounds are you now arguing that they should fake some kind of neurotypical feelings of care?

    So in general I think there really is no other position to take other than to accept that those who are worse-off than we are should be sought out and helped to the best of our abilities - in other words, if the cost of us helping them is reasonably lower than the relief the victim experiences, we have a moral obligation to do so.darthbarracuda

    But that can't follow if you begin with this notion of "I care". It doesn't deal with the people who don't actually care (through no fault of their own, just bad genetic luck probably exacerbated by bad childhood experience).

    So a morality based on neurotypicality is not as self-justifying as you want to claim. A consequence of such a rigid position is clearly eugenics - let's weed the unempathetic out.

    Of course we instead generally take a more biologically sound approach - recognise that variation even in empathy is part of a natural spectrum. Degrees in the ability to care are neurotypically normal. Where intervention is most justified is in childhood experience - get in there with social services. And also consider the way that "normal society" in fact might encourage un-empathetic behaviours. Then for the dangerously damaged, you lock them away.

    So to make care central, you have to deal with its natural variety in principled fashion - as well as the fact that this is essentially a naturalistic argument. Is is ought. Because empathy is commonplace in neurodevelopment, empathy is morally right.

    This leads to uncomfortable/guilty conclusions that I think modern ethicists have made an entire speculative field out of to try to mitigate: essentially much of modern ethics ends up being apologetics for not doing enough, or being a lazy, selfish individual, i.e. justifying inherent human dispositions as if they are on par with our apparent moral obligations.darthbarracuda

    From a psychological point of view, getting out and involved in ordinary community stuff is the healthy antidote to the deep pessimism that an isolationist and introverted lifestyle will likely perpetuate.

    So it is quite wrong - psychologically - to frame this in terms of people being lazy and selfish (as if these were the biologically natural traits). Instead, what is natural - what we have evolved for - is to live in a close and simple tribal relation. And it is modern society that allows and encourages a strong polarisation of personality types.

    The good thing about modern society is that it allows a stronger expression of both introversion and extraversion - the most basic psychodynamic personality dimension. And then that is also a bad thing in that people can retreat too far into those separate styles of existence.

    ....and most of all the complete abandonment of one's own personal desires in order to help others.darthbarracuda

    So from one extreme to the other, hey?

    I think you have to start with the naturalistic basis of your OP - that we neurotypically find that we care about the suffering (and happiness) of others. And then follow that through to its logical conclusions. And this complete individual self-abnegation is not a naturalistic answer. It is not going to be the neurotypically average response - one that feels right given the way most people feel.
  • The intelligibility of the world
    Could someone explain to me what is wrong with the homuncular approach? People speak as if this is some big fallacy, but until the homuncular approach is proven wrong, why should we be afraid of it?Metaphysician Undercover

    Infinite regress. An explanation endlessly deferred is an explanation never actually given.
  • General purpose A.I. is it here?
    Agency is any system which observes and acts in its environment autonomously.m-theory

    Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

    So how precisely are observation, action and autonomous defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.
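
    For reference - and this is just the standard textbook formulation, not anything supplied in this thread - the one piece of maths a pomdp does come with is the Bayesian belief update over a designer-specified state space:

        b'(s') = η · O(o | s', a) · Σ_s T(s' | s, a) · b(s)

    where the states S, actions A, transition model T, observation model O and rewards R are all specified in advance by whoever builds the model. Which is exactly the issue: the semantics of those symbols gets put there by the human modeller.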

    The same applies to a computational agent, it is embedded with its environment through sensory perceptions.m-theory

    Again this is equivocal. What is a "sensory perception" when we are talking about a computer, a syntactic machine? Give us the maths behind the assertion.

    Pattee must demonstrate that exact solutions are necessary for semantics.m-theory

    But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.

    You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

    http://www.people.vcu.edu/~mikuleck/rosrev.html

    I also provided a link that is extremely detailed.m-theory

    The question here is whether you understand your sources.

    Pomdp illustrates why infinite regress is not completely intractable; it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary and the general solutions used in Pomdp resolve issues of the epistemic cut.m-theory

    Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

    Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.

    I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.m-theory

    Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.

    To account for the competence of AlphaGo one cannot simply claim it is brute force of syntax as one might do with Deepblue or other engines.m-theory

    But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.

    If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.

    The Chinese room does not refute computational theories of the mind, never has, and never will.
    It simply suggests that because the hardware does not understand then the software does not understand.
    m-theory

    Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

    But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
  • The intelligibility of the world
    Then use "sense" or basic perception if experience is too vague or too complex a notion for your material cause.schopenhauer1

    You miss the point. No matter how we might refer to dasein or whatever, in pointing to it, we are already constructing a conceptualised distance from it. We are introducing the notion of the self which is taking the view of the thing from another place.

    So even phenomenology has an irreducible Kantian issue in thinking it can talk about the thing in itself which would be naked or primal experience. Any attempt at description is already categoric and so runs immediately into the obvious problems of being a model of the thing. You can't just look and check in a naively realistic way to see what is there. Already you have introduced the further theoretical constructs of this "you" and "the thing" which is being checked.

    Oh come now. A baby or animal doesn't have brute fact experiences? It only becomes experience through some sort of linguistic filter? Blah.schopenhauer1

    Again, to talk about animals having just brute fact experiences is a convincing theoretical construct, but still essentially a construct.

    How do we imagine it to be an aware animal? Using reason, we can say it is probably most closely like ourselves in our least linguistic and self-conscious state - like staring out the window in a blank unthinking fashion. So we can try to reconstruct a state that is pre-linguistic. It doesn't feel impossible.

    But the point of this discussion is that it is humans that have a social machinery for structuring experience in terms of a logical or grammatical intelligibility. We actually have an extra framework to impose on our conceptions and our impressions.

    This is why there is an issue of how such a framework relates to the world itself. Is the machinery that seems epistemically useful for structuring experience somehow also essentially the same machinery by which the world ontically structures its own being? Is logic an actual model of causality in other words?

    You have to explain that better to be relevant in the conversation.schopenhauer1

    Or you have to understand better to keep up with the conversation. Definitely one or the other. :)
  • General purpose A.I. is it here?
    Semantics cannot exist without syntax.
    To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
    To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.
    m-theory

    Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp?

    A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.

    But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room.

    How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built?

    Sure, neural networks do try to implement this kind of biological realism. But the problem for neural nets is to come up with a universal theory - a generalised architecture that is "infinitely scalable" in the way that Turing computation is.

    If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.

    Every time some new algorithm must be written by the outside hand of a human designer rather than evolving internally as a result of experiential learning, you have a hand-crafted machine and not an organism.

    So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.
  • The intelligibility of the world
    How is the panpsychist that different from a pragmatic semiotic theorist if both take experience as a brute fact?schopenhauer1

    I would put "experience" in quote marks to show that even to talk about it is already to turn it into a measurable posited within a theoretical structure.

    So the main difference is that you are taking experience as a brute fact. Essentially you are being a naive realist about your phenomenological access. Qualia are real things to you.

    I would take qualia as being the kinds of facts we can talk about - given a suitable structure of ideas is in place.

    Your approach is illogical. Either it is homuncular in requiring a self that stands outside "the realm of brute experience" to do the experiencing of the qualia. Or the qualia simply are "experiential", whatever the heck that could mean in the absence of an experiencer.

    My way is logical. It is the global structure of observation that shapes up the appearance of local observables. And these observables have the nature of signs. They are symbols that anchor the habits of interpretation.

    So in talking about qualia - the colour red, the smell of a rose - this is simply how pixellating talk goes. It is something we can learn to do by applying a particular idea of experience to the business of shaping up experience's structure. If I focus hard in the right way, I can sort of imagine redness or a rose scent in a disembodied, elemental, isolated fashion, as the qualia social script requires. I can perform that measurement in terms of that theory and - ignoring the issues - go off believing that a panpsychic pixels tale of mind is phenomenologically supported.
  • The intelligibility of the world
    Well, that is not sensation, that is the structure in which sensation works within, not the sensation itself.schopenhauer1

    So you say. But good luck with a psychology which is not focused on a structure of distinctions as opposed to your panpsychic pixels.
  • The intelligibility of the world
    I don't get how logic is sensation then. I'm all ears.schopenhauer1

    It is the structure of sensation. And sensation without structure feels like nothing (well, like vagueness to be more accurate).

    So if the world is logically structured, then that is the structure sensation needs to develop to be aware of the world.

    And the world itself must be logically structured as how else could it arrive at an organisation that was persistent and self-stable enough for there to be "a world", as opposed to a vague chaos of disorganised fluctuations?
  • The intelligibility of the world
    Also, I think you might find interest in at least some of what the analytics have to say, particularly Koslicki, Loux, Lowe and Tahko (hard-core hylomorphist neo-Aristotelians).darthbarracuda

    Any secondary literature that talks about my primary interests - Anaximander, Aristotle and Peirce - is going to be interesting to me. And the secondary literature around Aristotle is of course vast. He is the context for metaphysics, so every camp has to have something to say on that.

    But we have strayed away from the OP.

    The speculative/contentious point that I make there is the one that is represented by Anaximander and Peirce, rather than Aristotle. And that is that the Cosmos is intelligible because it itself represents a creative process that can be understood as the bootstrapping development of intelligibility.

    So as a metaphysical position, it is "way out there". :)

    But also, it is a holistic way of thinking about existence which is pretty scientific now.

    So systems science or natural philosophy is an Aristotelean four causes tradition that indeed detours through German idealist philosophers like Schelling. And then Peirce makes the connection between symbol and matter as the way to operationalise the four causes in the way modern science can recognise. Formal and final cause become top-down constraints that shape bottom-up material and effective freedoms. And constraints become the symbolised part of nature - the information that is the memory of a system or dissipative structure.

    So the intelligibility of nature is a consequence of nature itself being a fundamentally semiotic or "mind-like" process. That is why Peirce described existence as the generalised growth in reasonableness.

    But calling it mind-like is really only to stress how far out of Kansas we are when it comes to standard issue reductionist realism which only wants to acknowledge a reality born of material and efficient cause. So calling it mind-like isn't to invoke a phenomenological notion of mind, nor the dualist notion of mind, but instead semiotics' own idea of mindfulness, which is quite different in its own way metaphysically.
  • The intelligibility of the world
    The late E.J. Lowe, Jonathan Schaffer, Tuomas Tahko, Ted Sider, Susan Haack, Michael J. Loux, the late David Lewis, Peter van Inwagen, Timothy Williamson, Amie Thomasson, Sally Haslanger, David Chalmers, Kit Fine, D. M. Armstrong, Trenton Merricks, Eli Hirsch, Ernest Sosa, Daniel Korman, Jaegwon Kim, etc.

    The analytics.
    darthbarracuda

    Yep. Most of those I would be in deep disagreement with. But now it is because they represent the reductionist and dualistic tendency rather than the romantically confused.

    That is why I am a Pragmatist. As I said, reductionism tries to make metaphysics too simple by arriving at a dichotomy and then sailing on past it in pursuit of monism. The result is then a conscious or unwitting dualism - because the other pole of being still exists despite attempts to deny it.

    You read Heidegger, Husserl, the idealists?darthbarracuda

    Not with any great energy. I'm quite happy to admit that from a systems science standpoint, it is quite clear that the three guys to focus on are Anaximander, Aristotle and Peirce. Others like Kant and Hegel are important, but the ground slopes away sharply in terms of what actually matters to my interests.
  • The intelligibility of the world
    Also, contemporary realist metaphysics is largely concerned with ontology and not with the broader metaphysical stories.darthbarracuda

    Again, who are you talking about in particular?

    It's far more conservative than your version of metaphysics, with the only notable things I can think of being discussions of supervenience, grounding, causality and semantic meaning.darthbarracuda

    What you might be talking about just keeps getting muddier to me.
  • The intelligibility of the world
    I'd still like to know what you think are examples of bad metaphysics.darthbarracuda

    It's hard to be particular because the ways of expressing the generalised confusion of romanticism are so various. But anything panpsychic like Whitehead, or aesthetic like SX cites. I don't mind theistic approaches because they stick to a Greek framework of simplicity and so can deal with the interesting scholarly issues - right up to the point where God finally has to kick in.
  • The intelligibility of the world
    What is this particular way? The semiotic trifold?darthbarracuda

    That is what I argue is the most penetrating model of it, yes.
  • The intelligibility of the world
    What legitimate differences are there between your conception of metaphysics and theoretical physics?darthbarracuda

    As I've already said, I see metaphysics and science as united by a common method of reasoning - the presumption the world is intelligible because it is actually rationally structured in a particular way.

    So the only possible other choice - given that method has become so sharply defined and unambiguous - is whatever is its sharp "other". And I am afraid we do see that other showing its Bizarro head and claiming to be doing Bizarro metaphysics (and also crackpot science, of course).

    Nobody pays you to think about the world, they pay you for results that can be applied to the economy in some way, and everyone's gotta pay the bills.darthbarracuda

    That is sadly true on the whole as I say. Even philosophy and fine art courses push the modern marketability of the critical thinking skills they teach.

    But still, if we are talking about who is best equipped to do metaphysical-strength thinking these days, that is a different conversation.
  • The intelligibility of the world
    I don't really understand what you have in mind when you say "romanticism" or "PoMo". Do you not appreciate Spinoza, Descartes, Husserl, Heidegger, etc? Only some? Only those who aren't easily fitted into your pragmatism?darthbarracuda

    All celebrated figures are celebrated for some reason. So I wouldn't dismiss anyone or any movement out of hand. But yes, I am saying something much stronger than merely that romanticism does not fit easily with rationalism. I'm saying it is the maximally confused "other" of rationalism.

    And pragmatism - if understood properly - is the best balance of the realist and idealist tendencies in philosophy. So it already incorporates phenomenology, or the irreducibility of being in a modelling relation with the world, in its epistemology.

    Science - as a method - isn't naive realism or even bald empiricism. It is rational idealism. It is a method that starts by accepting knowledge is radically provisional, and then works out how to proceed from there.

    Well, yes and no. If measurement is the only way of understanding the world (what I see as empiricism), then either it must be shown how philosophy utilizes measurement, or it must be seen with skepticism.darthbarracuda

    Do you think philosophy could have got going if philosophers were blind, deaf and unfeeling? Of course measurement is already involved in having sensations of the world.

    The point of philosophy is that ideas and perceptions are so biologically and culturally entangled with each other in ordinary life. So as a method, it works to separate these two aspects of the modelling relation from each other. It started by showing sensation (biological measurement) could be doubted, just as beliefs (cultural ideas) could be doubted.

    Then eventually this evolved into science where acts of measurement - turning an awareness of the world into numbers read off a dial - became the "objective" way to operate. But calling measurement objective is a little ironic given that it is so completely subjective now in being dependent on understanding the world only in terms of dial readings. Science says, well, if in the end there is only our phenomenology, our structure of experience, then let's make even measurement consciously a phenomenological act.

    Usually philosophy utilizes things like counterfactual reasoning, thought experiments, etc. Other fields use these as well. These are generally "fuzzy" in their nature, though. When a philosopher thinks up something like, let's say, Neo-Platonism, it's extremely abstract and fuzzy.darthbarracuda

    If we are talking about metaphysics, there is nothing fuzzy about its reasoning method. The dichotomy or dialectic says quite simply that possibility must divide into either this or that - two choices that can be seen to be mutually exclusive and jointly exhaustive.

    The only thing "fuzzy" is that people then take up different positions about the result of this primary philosophical act. You can treat a dichotomy as either a problem - only one possibility can be true, the other must be false. Or the opposite to such monism is to embrace the triadic holism that resolves the division - adopt the hierarchical view where dichotomies are differentiations that also result then in integration. In splitting vague possibility apart into two crisply complementary things, that then becomes the basis of an existence in which the contrasts can mix. The world is the everything that can stand between two poles that represent mutually-derived extremum principles.

    In other words, a constraint is a totally different kind of thing from a zebra. The latter is studied by biologists, the former (as it is in itself) by the metaphysician.darthbarracuda

    WTF? Have you ever taken a biology class? Are you so completely unaware of the impact that science's understanding of constraints has had on metaphysics? Next you will be saying Newton and Darwin told us a lot about falling apples and finch beaks, and contemporary philosophy shrugged its shoulders and said "nah, nothing to see here folks".

    I'm referring to contemporary realist analytic metaphysics.darthbarracuda

    It's true that those employed in philosophy departments struggle to produce anything much that feels new these days. The real metaphysics of this kind is being done within the theoretical circles of science itself. The people involved would be paid as scientists.

    Yet starting with Ernst Mach, there is a real tradition of encouraging a useful level of interaction. And analytic types fit in pretty well as interpreters, critics and synthesisers. At the bleeding edge of ideas, any academic boundaries are in practice rather porous.

    I think you may just have an idea that science is somehow basically off track and you need a metaphysical revolution led by philosophers to rescue it.

    So instead you see a world where science charges along, and metaphysicians look more like sucker fish hitching a ride, picking off some crumbs. And because it doesn't match your preconception, you read that picture wrong.
  • The intelligibility of the world
    There's different methods within this broad "scientific" account you presented. If you're an astronomer, you'll use a telescope. If you're a microbiologist, you'll use a microscope. If you're a chemist, you'll use a thermometer and a plethora of other expensive equipment; same goes for practically any scientific field.darthbarracuda

    Yes, the business of measurement is various.

    But I thought you were saying there are other methods of seeking intelligibility itself - methods that aren't just the general method of scientific reasoning.

    Again, my position is that the world is intelligible - it actually is structured in terms of constraints and freedoms, global rules that shape local instances.

    And so it is not surprising that once human thinking aligns with that - once that is our conscious method of inquiry - then we find the world to be surprisingly easy to make sense of.

    And on this score, science is just applied metaphysics. It is a historical continuation of a method to its natural conclusion. Science has just taken the intelligible categories of Greek metaphysics - the dichotomous questions like is existence atomistic or is it holistic - and polished up the mathematical expression of the ideas, and the ability to then check them through a process of supporting measurements.

    You can rightfully point out that the purpose for even thinking this way about existence is a further matter of complication.

    The point about metaphysical/scientific reasoning is that it is meant to be dispassionate. It is meant to be the view of reality that transcends any particular human or social interests. By replacing gods, spirits, customs and values with a naked system of theory and measurement, the thought was that this would allow the Cosmos to speak its own truth, whatever that might be. We would see its reality unfiltered.

    But of course it is really difficult in fact to suppress all our own natural interests when investigating the world. It is obvious that even science embeds a strong human interest in gaining a mechanical/technological control over material existence. So science, in practice, is not as dispassionate as it likes to pretend.

    But still, the reasoning method is designed to let the Cosmos speak for itself as much as might be possible. It is objective in offering ways to take ourselves out of the equation as much as we let it.

    So then, on that score, scientific reasoning conjures up its own Romantic other. If cosmological reasoning - the kind that targets intelligible existence - has the goal of being dispassionate, then of course that opens the door to the notion of a counter-method based on being humanly passionate in trying to answer the same questions.

    So everything reason does, Romanticism would want to do the opposite.

    Instead of objectivity, let's have maximum subjectivity. Instead of careful measurement of the world, now any imagined idea about the world is good enough. Instead of the formal mathematical expression of ideas, let's try opaque poetic grandiloquence. Instead of expecting global intelligibility, let's expect global incoherence.

    So it is an inevitable part of rationality's success at developing itself into a tight self-supporting methodology that it should also, automatically, produce its Bizarro world other.

    I guess on that score, science could be said to have only room for the one method, while modern philosophy - having less culturally patrolled boundaries - certainly has room for the two.

    But that is my analysis of the variety of methods that might exist in philosophy. I haven't heard what other methods of "reasoning" you have in mind when it comes to the standard issue approach of intelligibility-seeking metaphysics.

    The point being made, though, is what exactly is the subject matter of philosophy, in particular metaphysics, that makes it a legitimate attempt to understand the world, and why this subject matter is usually unable to be studied by more..."mainstream" science.darthbarracuda

    So it is important to you that there be a difference? Are you seeking to erect a cultural fenceline even if it need not exist? This is what I find weird about your stance.

    Or I guess not. It is daunting if it is the case that to do metaphysics in the modern era requires one to actually have a deep knowledge of science and maths as well. That's a lot of work.

    There aren't really any "discoveries" within metaphysics, just explanations of what we already see on a day-to-day basis.darthbarracuda

    Nope. That seems an utterly random statement to me. Do you have an example of current metaphysics papers of this kind?
  • The intelligibility of the world
    ...there seems to be more than one method of understanding the world.darthbarracuda

    So apart from "scientific" reasoning - a process of guessing a general mechanism, deducing its particular consequences, then checking to see if the behaviour of the world conforms as predicted - what are these other methods? Can you explain them?

    To say the world is intelligible is to say it is structured in terms of local instances of global rules. And so any method is going to boil down to seeking the global rules that can account for local instances. Where's the variety there?
  • The intelligibility of the world
    Here's a definition of self-organization I came across at BusinessDictionary.com: "Ability of a system to spontaneously arrange its components or elements in a purposeful (non-random) manner, under appropriate conditions without the help of an external agency."

    There are a number of questionable issues here.
    Metaphysician Undercover

    So this is an example of how science does think through its metaphysics. As already said to you in other threads where you have rabbited on about the nature of purpose, a naturalistic systems view demystifies it by talking about final cause in terms of specific gradations of semiosis.

    {teleomaty {teleonomy {teleology}}}.

    Or in more regular language, {propensity {function {purpose}}}.

    So we would have a mere physico-chemical level of finality as a propensity, a material tendency. A bio-genetic level of finality would be a function, as in an organism. And then a psycho-linguistic level of finality would be that which we recognise in a thinking human.

    See: http://www.cosmosandhistory.org/index.php/journal/article/view/189/284
  • The intelligibility of the world
    But the traditionalist account of intelligibility was such that it conveyed the sense of a complete, (if you like illuminated) understanding, in the sense of there no longer being any shortcoming or gap between the understanding and the thing understood.Wayfarer

    The Greeks were naturally stunned at finding that mathematical arguments have the force of logical necessity. If we take certain geometric axioms as unquestionable truths, then a whole bunch of incontrovertible results follow deductively.
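
    Just to give the flavour of that deductive machinery with a stock schoolroom illustration of my own (not a quotation of Euclid): grant the parallel postulate and the angle sum of any triangle follows at once.

    ```latex
    % Through the apex C of triangle ABC, draw the line DE parallel to AB.
    % Alternate interior angles give:
    \angle DCA = \angle A, \qquad \angle ECB = \angle B
    % The three angles at C lie along the straight line DE:
    \angle DCA + \angle C + \angle ECB = 180^{\circ}
    % Hence the incontrovertible result:
    \angle A + \angle B + \angle C = 180^{\circ}
    ```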

    It was literally the creation of a machinery of thought. And rather than some spiritual illumination, it was a Philosophism (as a precursor to Scientism). :) Plato was the Dawkins of his day to the degree that he reduced the world to a literal abstraction. A perfect triangle or perfect sphere was something real and substantial that could be grasped via the rationality of the mind - and as an idea, acted to form up the imperfect matter of the world.

    So this worshipful approach to the awe of mathematical reason - the demonstration that axiom-generated truths looked to explain the hidden regularity of nature - was understandable as a first reaction. But we've since also learnt that maths is only as good as the assumptions contained in its axioms. So maths itself is no longer quite so magical, just pragmatically effective. Yet also our connecting of maths to the world via the scientific method has developed so much that the essential wonder - that existence is intelligible in this pragmatic modelling fashion - persists.

    It is no longer amazing that the Cosmos is intelligible - it has to be just to exist as a self-organised state of global regularity. But it is amazing that we can really get at that structure through the dynamic duo of maths and measurement.

    Or where it becomes less amazing again: we should qualify this by noting that humans naturally favour the knowledge that pays its own way in terms of serving humanity's most immediate interests. Which is where Scientism and reductionism come in - the narrower view of causation that produces all our technology (including our political and economic "technology").

    Both philosophy and science are not big fans of holism. The great metaphysical system builders like Peirce and Hegel are held in deep suspicion. Neither AP nor PoMo likes grand totalising narratives. The idea that reality might be a reasonable place - actually driven by the purpose of becoming organised - is as unfashionable as it gets ... because society wants the machine thinking that creates the machines it is now dependent upon. He who pays the piper, etc.
  • The intelligibility of the world
    I think 'intelligible' traditionally relates to ordinary speech, not to philosophical discourse, and means that we can make out what the person is trying to communicate.andrewk

    Given this is a philosophy board and the OP was clearly meaning to apply the philosophical usage, talking instead about issues of ordinary language comprehension is an unhelpful sidetrack.

    I'll post the Wiki definition if it helps....

    In philosophy, intelligibility is what can be comprehended by the human mind in contrast to sense perception. The intelligible method is thought thinking itself, or the human mind reflecting on itself.

    Plato referred to the intelligible realm of mathematics, forms, first principles, logical deduction, and the dialectical method. The intelligible realm of thought thinking about thought does not necessarily require any visual images, sensual impressions, and material causes for the contents of mind.

    Descartes referred to this method of thought thinking about itself, without the possible illusions of the senses. Kant made similar claims about a priori knowledge. A priori knowledge is claimed to be independent of the content of experience.

    So the metaphysical surprise is that reality is logically structured. It appears to conform to the laws of thought. The world seems to operate with order and reason - regulated by formal/final cause or abstract rational principles.

    Traditionally, this seemed such a surprise that it was mystical. A transcendent cause of order seemed necessary because nature itself is naturally messy, with an ever-present tendency towards disorder.

    But now - through science and maths - we have discovered how structure in fact arises quite naturally in nature through fundamental principles of thermodynamic self-organisation. Disorder itself must fall into regular patterns for basic geometric reasons to do with symmetries and symmetry-breakings.

    So the intelligibility of the Cosmos is far less of an issue these days. We have things like selection principles and least action principles that explain the emergence of order even from randomness.
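
    And to be clear about what I mean by a least action principle, this is nothing more than the textbook statement (standard physics, not something special to this thread): of all the conceivable paths, nature "selects" the one that makes the action stationary.

    ```latex
    S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
    ```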
  • General purpose A.I. is it here?
    If the mind is something you can be sure that you have and that you can be sure correctly each time you inquire about the presence of your own mind...this would mean the term mind will be something that is fundamentally computational.m-theory

    Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here.

    Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions?
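
    For what it's worth, the concrete extra a POMDP does implement is a Bayesian belief update over hidden states. A minimal sketch in illustrative Python (my own made-up names, not any particular library's API) shows just how much of that is bare probabilistic bookkeeping rather than semantics:

    ```python
    # Illustrative sketch only: a generic POMDP belief update.
    # T[(s, a, s2)] is the transition probability, O[(s2, a, o)] the
    # observation probability. Everything reduces to table lookups and sums.
    def update_belief(belief, action, observation, T, O, states):
        new_belief = {}
        for s2 in states:
            prior = sum(T[(s, action, s2)] * belief[s] for s in states)
            new_belief[s2] = O[(s2, action, observation)] * prior
        total = sum(new_belief.values())
        return {s: p / total for s, p in new_belief.items()} if total else new_belief
    ```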

    It seems to me still that, sure, we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.

    A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out at some moment.

    So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.

    Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware-designing mind - to actually underwrite that. The computation itself is just blind, helpless syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that lies somewhere between the symbols for a 1 and a 0.
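
    To make that concrete, here is a minimal Turing-style step loop in illustrative Python (my own sketch, not anyone's actual machine). The "reading head" is nothing but a table lookup, and a smudged symbol that isn't in the table simply crashes the lookup - the machine has no resources of its own to repair it.

    ```python
    # Illustrative sketch of a Turing-style machine: pure syntax.
    # The head reads a symbol, looks up (state, symbol) in a transition table,
    # writes, moves, and changes state. Nothing here "knows" what the symbols
    # mean; a smudge - a symbol missing from the table - raises a KeyError.
    def run(tape, table, state="start", blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = table[(state, symbol)]  # the whole "interpretation"
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))
    ```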

    So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.

    A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffs to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics.
  • The intelligibility of the world
    But one can have an a-utility understanding. For example: you understand that Gandalf loves his Hobbits. This is true understanding, but it is also useless understandingIVoyager

    Of course you would have to have useless understandings. That is what justifies talking about the contrary of a useful understanding. Again, this is how we render the world intelligible - A exists because not-A exists to make the existence of A crisply a fact.
  • The intelligibility of the world
    Now I'm skeptical of science alone being able to answer these questions, as if it can operate without a rudimentary metaphysical structure, but what remains to be shown is why this is the case - that is to say, why some questions are empirical and other apparently not.darthbarracuda

    It is a faulty binary to go about saying science is empirical, philosophy is rational, therefore the two are mutually exclusive. Sure, you can advance that theory of the world in a way that makes it intelligible for you. But measurement should demonstrate the faultiness of such reasoning.

    You yourself just said Schopenhauer was a rather empirical chap. And science is a deeply metaphysical exercise, explicit in making ontic commitments to get its games going.

    So you are applying the method by which we attempt to achieve intelligibility - trying to force through some LEM-based account of the world. But you are failing to support it with evidence.
  • The intelligibility of the world
    Note the mention of worth/value, which is a sort of ineffable ground.who

    Yes, it is important to a proper understanding of pragmatism - the original Peircean version rather than the popularised Jamesian one - that it isn't simply a presumption of some utilitarian ground of value. What it means to "work" - to serve a purpose - is also up for discussion as part of the epistemology. So it is really a claim about the value of a general reasoning method.
  • The intelligibility of the world
    If it is indeed the case that science has an epistemology, then this just further shows how philosophy is a separate and prior domain.darthbarracuda

    Why the snobbery? Historically, science has clearly been philosophy's best and sharpest expression of itself. Its pragmatism deals with idealism/realism in a systematic, self-grounding fashion.

    You seem to miss the whole point of intelligibility. It is about constraining possibility so that it leaves you with a crisp framework of yes/no binary questions about existence. And once you have a theory expressed in counterfactuals, then you can actually make matchingly crisp measurements in the name of the theory. You can answer the questions with experienced facts.

    So intelligibility is pragmatism. It doesn't mean "being able to be understood". It means being understood in that particular way.

    If you want to understand reality some other way, say a prayer or hold a seance. Or learn to write obscure PoMo texts that are the opposite of intelligible models of existence.