• AI cannot think
    Don't think of thinking as a solitary activity. Think of it instead as a circular causal process: open communication between two or more processes, with each process defining a notion of truth for the other, leading to semi-autonomous adaptive behaviour.

    E.g. try to visualize a horse without any assistance and draw it on paper. This is your generative psychological process 1. Then automatically notice the inaccuracy of your horse drawing. This is your critical psychological process 2. Then iterate to improve the drawing. This instance of thinking is clearly a circular causal process involving two or more partially-independent psychological actors. Then show the drawing to somebody (Process 3) and ask for feedback and repeat.
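
    For what it's worth, the generate-criticise-iterate structure is easy to caricature in code. Below is a toy numerical stand-in for the drawing exercise (the target value, step size and iteration count are arbitrary choices of mine, not anything canonical): Process 1 proposes, Process 2 criticises against a truth that only it consults, and the loop, not either part, does the "thinking".

```python
def generator(guess, feedback):
    """Process 1: revise the proposal using the critic's feedback."""
    return guess + 0.5 * feedback  # arbitrary step size

def critic(proposal, target):
    """Process 2: a signed error -- its own 'notion of truth'."""
    return target - proposal

def think(target, iterations=20):
    """Neither process 'thinks' alone; the coupled loop converges."""
    guess, feedback = 0.0, 0.0
    for _ in range(iterations):
        guess = generator(guess, feedback)
        feedback = critic(guess, target)
    return guess

print(round(think(7.0), 3))  # approaches the critic's hidden target: 7.0
```

    Showing the result to a third party (Process 3) would amount to composing a second feedback loop around the first.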

    So in general, it is a conceptual error to think of AI systems as closed systems that possess independent thoughts, except as an ideal and ultimately false abstraction. Individual minds, like individual computer programs, are "half-programs": reactive systems waiting for external input, whose behaviour isn't reducible to an individual internal state.
  • Idealism in Context
    If mathematics were merely convention, then its success in physics would indeed be a miracle — why should arbitrary symbols line up so exactly with the predictability of nature? And if it were merely empirical, then we could never be sure it applies universally and necessarily...Wayfarer

    Science isn't committed to the reality of alethic modalities (necessity, possibility, probability) in the devout epistemological sense you seem to imply here, for they are merely tools of logic and language - the modalities do not express propositional content unless they are falsifiable, which generally isn't the case.

    A nice case of the “unreasonable effectiveness” is Dirac’s prediction of anti-matter — it literally “fell out of the equations” long before there was any empirical validation of it. That shows mathematics is not just convention or generalisation, but a way of extending knowledge synthetically a priori.Wayfarer

    IMO, that is merely an instance of an inductive argument happening to succeed. A purpose of any theory is to predict the future by appealing to induction -- but there is no evidence of inductive arguments being more right than wrong on average. Indeed, even mathematics expresses that it cannot be unreasonably effective, viz. Wolpert's No Free Lunch theorems of statistical learning theory.

    Humans have a very selective memory when it comes to remembering successes as opposed to failures. Until the conjecture is tested under scrutiny, it can be dismissed.
  • Idealism in Context
    But Kant’s point is that neither account explains why mathematics is both necessary and informative. If it were analytic, it would be tautological; if empirical, it would be contingent. The synthetic a priori is his way of capturing that “in-between” character. It also has bearing on how mathematics is 'unreasonably efficacious in the natural sciences.'Wayfarer

    Or rather, it explains why mathematics is simply efficacious - mathematical conventions are arbitrary and independent of facts and hence a priori, and yet the mathematical proofs built upon them require labour and resources to compute, which implies that the truth of mathematical theorems is physically contingent and hence synthetic a posteriori. Hence the conjecture of unreasonable effectiveness is not-even-wrong nonsense, due to the impossibility of giving an a priori definition of mathematical truth.
  • Thoughts on Epistemology
    Here is my position:

    1). I cannot know false propositions a priori.
    2). I can have known false propositions a posteriori.

    This is because I cannot distinguish the truth from my beliefs a priori, and yet I do make the distinction in hindsight. My concept of truth is in flux, so there is no contradiction here, even if this position isn't compatible with common grammatical usage of the verb "to know" or "to have known".
  • Evidence of Consciousness Surviving the Body
    A seventh misconception treats negative cases as field-defeaters (“if some reports are wrong, the thesis fails”). The thesis of this chapter is proportionate: it does not depend on unanimity or on universal accuracy. It claims that some anchored cases survive ordinary scrutiny and that these anchors stabilize the larger testimonial field. One counterexample to a weak report does not touch a different case whose particulars were independently confirmed.Sam26

    But you haven't presented any cases that can be expected to survive an ordinary degree of scientific scrutiny.

    A third misconception claims “there are no controls,” implying that without randomized trials, testimony cannot carry weight. Prospective hospital protocols supply a different kind of control: fixed clinical clocks, environmental constraints (taped eyes, sealed rooms), hidden-target or procedure-bound particulars, and independent confirmation. These features limit post-hoc embroidery and allow specific claims to be checked. They do not turn testimony into lab instrumentation, but they do make some reports probative under ordinary public standards.Sam26

    Randomized trials aren't a requirement, but a controlled environment is necessary so as to eliminate the possibility that supposedly unconscious subjects are actually conscious and physically sensing and cognitively reconstructing their immediate environments by normal sensory means during EEG flat-lining. One such experiment is the Human Consciousness Project, which investigated awareness during resuscitation of cardiac arrest patients in collaboration with 25 medical centers across the US and Europe. That investigation, among other things, controlled the environment so as to assess the possibility that NDE subjects were sensing information that they couldn't possibly deduce by normal bodily means (remote viewing).

    "The study was to introduce a multi-disciplinary perspective, cerebral monitoring techniques, and innovative tests.[7] Among the innovative research designs was the placement of images in resuscitation areas. The images were placed on shelves below the ceiling and could only be seen from above. The design was constructed in order to verify the possibility of out-of-body experiences"

    The results were negative, with none of the patients recalling seeing the test information that was situated above their heads:

    " The authors reported that 101 out of 140 patients completed stage 2 interviews. They found that 9 out of 101 cardiac arrest survivors had experiences that could be classified as near-death experiences. 46% could retrieve memories from their cardiac arrest, and the memories could be subdivided into the following categories: fear; animals/plants; bright light; violence/persecution; deja-vu; family; recalling events post-CA. Of these, 2% fulfilled the criteria of the Greyson NDE scale and reported an out-of-body experience with awareness of the resuscitation situation. Of these, 1 person described details related to technical resuscitation equipment. None of the patients reported seeing the test design with upward facing images."

  • Evidence of Consciousness Surviving the Body
    In modern western societies, a testimony that appeals to clairvoyance falls under misrepresentation of evidence, an inevitable outcome under witness cross-examination in relation to critical norms of rational enquiry and expert testimony, possibly resulting in accusations of perjury against the witness. I would hazard a guess that the last time an American court accepted 'spectral' evidence was during the Salem witch trials.

    The need for expert testimony is even enshrined in the code of Hammurabi of ancient Mesopotamia; not even the ancients accepted unfettered mass testimony.

    So much for us "naysaying materialists" refusing to accept courtroom standards of evidence (unless we are talking about courtrooms in a backward or corrupt developing country).
  • Evidence of Consciousness Surviving the Body
    I am guessing that if EEGs are flatlining when patients are developing memories associated with NDEs, this is evidence for sparse neural encoding of memories that does not involve the global electrical activity of millions of neurons entailed by the denser neural encoding that an EEG would detect.

    Which seems ironic, in the sense that Sheldrake proponents seem to think that apparent brain death during memory formation is evidence for radically holistic encoding of memories extending beyond the brain. But when you think about it for more than a split second, the opposite seems far more likely, namely atomistic, symbol-like memories being formed that slip under the EEG radar.
  • Evidence of Consciousness Surviving the Body
    Sam, name one reproducible experiment under controlled laboratory conditions that confirms that NDEs entail either clairvoyance or disembodied cognition.

    Intersubjective reproducibility of stimulus-responses of subjects undergoing NDEs is critical for the intersubjective interpretation of NDE testimonies, for otherwise we merely have a set of cryptic testimonies expressed in the private languages of NDE subjects.
  • Evidence of Consciousness Surviving the Body


    Sure, so the question is whether proponents of physical explanations for "consciousness" and purported anomalous phenomena share that sentiment, in which case everyone is arguing at cross purposes, assuming of course that both sides can agree that the evidence for telepathy and remote viewing is sorely lacking.
  • Evidence of Consciousness Surviving the Body
    Why must it be physical? This assumes from the outset that everything real must be made of particles or fields described by physics. But that is precisely the point in dispute.

    Consider an analogy: in modern physics, atoms aren’t little billiard balls but excitations of fields. Yet fields themselves are puzzling entities—mathematically precise but ontologically unclear. No one thinks an electromagnetic field is a “blob of energy floating around.” It’s a structuring principle that manifests in predictable patterns, even if its “substance” is elusive.
    Wayfarer

    Which is precisely why Physics survives theory change, at least for ontic structural realists - for only the holistic inferential structure of theories is falsifiable and semantically relevant. I think you might be conflating Physics with Physicalism - the misconception that physics has determinate and atomic denotational semantics (i.e. Atomism).

    It is because "Physicality" is intersubjective, structural, and semantically indeterminate with respect to the subjectivities of the users of physical theories, that every possible world can be described "physically".

    Being "physical" isn't a property of the denoted, but refers to the fact that the entity concerned is being intersubjectively denoted, i.e. referred to only in the sense of abstract Lockean primary qualities that are intersubjectively translatable by leaving the Lockean secondary qualities undefined, whereby individual speakers are free to subjectively interpret physics as they see fit (or as I call it, "The Hard Feature of Physics").
  • Evidence of Consciousness Surviving the Body
    If we agree that one case of NDE was real, then we are dealing with an anomaly that materialism cannot describe. I am wondering how you could explain the NDE experience when there is no brain activity.MoK

    For the record, I don't consider any such case to be real - a flat EEG reading isn't a sufficient measurement for defining brain death. Only quacks seriously entertain such theories. But if such cases were real in some sense of having intersubjective confirmation of anomalous phenomena, then it would at most imply a hole in our current physical theories, resulting in a new physical theory with regards to an extended notion of the body with additional senses, coupled with a new definition of personhood. Ultimately, all of this would amount to reducing our conception of such anomalous phenomena to a new physical normality that would ultimately leave religious followers and believers of the paranormal feeling as dissatisfied as they are presently.

    NDEs cannot in principle deliver the epistemic certainty and psychological security that their enthusiasts want, even if they are assumed to be veridical.
  • Evidence of Consciousness Surviving the Body
    Even if NDEs were veridical, that wouldn't be enough to challenge physicalism or mind-brain equivalence. The same goes for past life regression. At most, only a particular and narrow minded version of physicalism would be refuted. The same existential doubts, anxieties and disputes would eventually resurface exactly as before, with respect to a merely extended conception of the body and the senses, a conception that could even bring new forms of nihilism.
  • Idealism in Context
    That all events in the universe are causally inevitable is the thesis of Determinism. A thesis is an hypothesis, not an ontological commitment. As a thesis, it accepts that it may be proved wrong, in the same way that the equation s = 0.5·g·t² may be proved wrong. A thesis does not require a suspension of scepticism, which is why it is a thesis.RussellA

    Actually that's untrue, because without ontological commitment to universal quantification over absolute infinity, one cannot distinguish the hypothesis of determinism from its anti-thesis.

    What a hypothesis means is subject to as much uncertainty as its truth value. Unless one is already committed to the truth of determinism, one isn't in a position to know what the hypothesis of "determinism" refers to.
  • Referential opacity
    Leibniz's Law at the Post Office

    The postal system relies upon referential transparency, namely of knowing an immutable address that is associated with an intended recipient, as opposed to knowing the mutable personal details of the sender and the recipient which are kept hidden from the postal service ("information opacity").

    So here, the information space (that is hidden from the postal service) is comprised of vectors of information, where a vector is a possible list of attributes corresponding to a possible user. This information space is dual to the address space, namely the set of possible postal addresses for possible users.

    The information space is a vector field; the vector field indices are the address space.

    Address information can also be an attribute of information space, but this shouldn't be confused with the address space: the address information that you put on your resume isn't the address used by the postal system. Address information is mutable information that is considered to be an attribute of senders and recipients, whereas a postal address is part of the immutable structure of the postal system.

    What if user moves house?

    If a user moves house, this is represented by an arrow linking 'before' and 'after' vectors in information space (assuming the info is available there). But from the perspective of the postal service, users don't move house; rather, houses change their occupants - because the postal system uses postal addresses to designate rigidly.
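
    A minimal sketch of the two perspectives, with invented addresses and occupants (Alice, Bob and the street names are purely illustrative): the address book is a mapping whose keys are immutable rigid designators and whose values are mutable occupant data.

```python
# The postal system's view: addresses are fixed indices, occupants are data.
occupants = {
    "1 High St": "Alice",
    "2 High St": "Bob",
}

# User's view: "Alice moves house."
# Postal view: two houses change occupants; no address changes.
occupants["2 High St"] = "Alice"
occupants["1 High St"] = None  # now vacant

print(occupants)
```

    Nothing in the operation touches the keys; "moving house" is expressible only as a change of values, which is what it means for the addresses to designate rigidly.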

    Leibniz's Law

    Assuming that Leibniz's Law holds with respect to a given postal service, then it holds internally in the sense of characterising the postal operations of that given postal system, but it does not hold externally in the sense of surviving a change to the postal service itself.

    The indiscernibility of identicals is a definitional criterion for the meaning of a pair of addresses:

    ∀x ∀y[ x = y → ∀F(Fx ↔ Fy)] (i.e. identical addresses imply identical occupants).

    Compare that to Frege's disastrous Basic Law V(b)

    ϵF = ϵG → ∀x(Fx ≡ Gx)

    Here, the difference is that ϵF and ϵG are extensions, namely vectors in information space rather than addresses. If these vectors are finite then they can be fully observed, meaning that if they are observed to be identical then they must be the same vector, meaning that V(b) is applicable. But in the infinite case, the two lists cannot be exhaustively observed, in which case we have at most equality between two incomplete lists, which obviously cannot imply that they denote the same vector, due to the problem of induction.

    (Frege, and many logicians after him, conflated the notion of addresses, which can always designate rigidly by virtue of merely being indexicals devoid of information content, with observation vectors that cannot rigidly designate the set of concepts that they fall under).
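
    The finite/infinite asymmetry can be made concrete with a toy comparison (the particular sets, generators and cutoff below are invented for illustration): finite extensions can be compared exhaustively, so observed equality settles identity, whereas infinite extensions can only be sampled, and agreement on any finite prefix settles nothing.

```python
from itertools import islice

# Finite case: exhaustive comparison is possible, so V(b) is harmless.
F = {0, 2, 4, 6}
G = {x for x in range(8) if x % 2 == 0}
print(F == G)  # True, and fully verified

# Infinite case: two "concepts" that agree on every observation made so far.
def evens():
    n = 0
    while True:
        yield n
        n += 2

def evens_until(limit=10**6):  # diverges from evens() only past the limit
    n = 0
    while n < limit:
        yield n
        n += 2

prefix_a = list(islice(evens(), 1000))
prefix_b = list(islice(evens_until(), 1000))
print(prefix_a == prefix_b)  # True, yet the underlying extensions differ
```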

    The identity of indiscernibles is postally invalid if multiple home ownership is allowed:

    ∀x∀y[∀F(Fx ↔ Fy) → x = y ] (which is true of a vector space, but generally false of a vector field).
  • Idealism in Context
    The movement of the stone is determined by the force of gravity.

    It is part of the nature of language that many words are being used as figures of speech rather than literally, such as "determined". Also included are metaphor, simile, metonymy, synecdoche, hyperbole, irony and idiom.
    RussellA

    Yes, that is perfectly reasonable as an informal description of gravity when describing a particular case of motion in the concrete rather than in the abstract and as Russell observed, in such cases the concept of causality can be eliminated from the description. But determinism takes the causal "determination" of movement by gravity literally, universally and outside of the context of humans determining outcomes, and in a way that requires suspension of Humean skepticism due to the determinist's apparent ontological commitment to universal quantification over generally infinite domains.

    Recall the game-semantic interpretation of the quantifiers, in which the meaning of a universal quantifier refers to a winning strategy for ensuring the truth of the quantified predicate P(x) whichever x is chosen. This interpretation is in line with the pragmatic sense of determination used in the language-game of engineering, where an engineer strategizes against nature to determine a product design that is correlated with generally favourable outcomes but that is never failure proof. (The engineer's sense of "winning" is neither universal nor guaranteed, unlike the determinist's).
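
    A toy rendering of that reading, using the elementary fact that n(n+1) is always even (the predicate and witness function are my own illustrative choices): the verifier's winning strategy is a function from the falsifier's challenge to a witness, and, as with the engineer, it can only ever be tested against finitely many challenges, never against the whole domain.

```python
def P(x):
    """The quantified predicate: x(x+1) is even."""
    return x * (x + 1) % 2 == 0

def winning_strategy(x):
    """Verifier's strategy: for ANY challenge x, exhibit the halving witness."""
    return x * (x + 1) // 2  # witness w with x(x+1) == 2*w

# The falsifier probes; the strategy answers. Only finitely many probes occur.
for challenge in [0, 1, 7, 10**6]:
    assert P(challenge)
    assert challenge * (challenge + 1) == 2 * winning_strategy(challenge)
print("strategy unbeaten on every challenge posed so far")
```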

    If a determinist wants to avoid being charged with being ontologically committed to Berkeley's Spirits in another guise, then he certainly cannot appeal to a standard game-semantic interpretation of the quantifiers. But then what other options are available to him? Platonism? (Isn't that really the same as the spirit world?). He has no means of eliminating the quantifiers unless he believes the world to be finite. Perhaps he could argue that he is using "gravity" as a semantically ambiguous rigid designator, but in that case he is merely making determinism true by convention...
  • Idealism in Context
    Determinism can always survive on a theoretical level, in the sense that an ill-posed problem with more than one possible solution can always be converted into a well-posed problem with exactly one solution by merely adding additional premises.

    However, the ordinary English meaning of "determine" does not refer to a property but to a predicate verb relating an intended course of action to an outcome. Ironically, an absolute empirical interpretation of "intention" is ill-posed, and hence so is the empirical meaning of "determination", which is why metaphysical definitions and defences of determinism are inherently circular.

    For this reason, I think materialism, i.e. a metaphysical commitment to objective substances, should be distanced from determinism - for if anything, a commitment to determinism looks like a metaphysical commitment to the objective existence of intentional forces of agency (i.e. spirits) that exist above and beyond the physically describable aspects of substances.
  • Evidence of Consciousness Surviving the Body
    If NDEs were objective, then intelligence agencies around the world would be training spies to induce them for purposes of remote viewing. Alas, the Stargate Project failed to establish the objectivity of OBEs and the entire project was terminated and declassified.

    If remote viewing test results are invariably bad for lucid dreamers with living brains, then I'm fairly confident that their results are not going to improve by inducing actual brain death.
  • Idealism in Context
    I'm not sure I follow you exactly. But the intention to interpret Locke's distinction as semantic seems like a good way to go. I think of it as a methodological decision. I don't know how far that coincides with your view.
    When you talk of "indexical relations" are you thinking of the equation, for example, between photons and colours? If so, I wouldn't equate finding them with the whole purpose of physics, nor think that it amounts to enabling inter-subjective communication. Or do I misunderstand you?
    Ludwig V

    Yes, the semantic distinction is a methodological distinction.

    I think of mathematical language as being analogous to a high-level programming language, such as the C programming language. In order for C to be portable to any computer hardware system, it must only specify the grammar of the language and must refrain from specifying how its expressions are to be compiled into machine code instructions, which is vendor-specific and requires a bespoke solution. Likewise, children must learn how to compile their mother tongue into thoughts and percepts; but their understanding of their language isn't part of the definition of their mother tongue, since their brains, ostensive learning and perspectives are unique to themselves.

    A physical language is about encoding common knowledge in a universal and portable format; so like C, its semantics evolved to become definitionally independent of the perceptual judgements of any individual user. This indispensable "hard feature" of a physical language is often mistaken by philosophers as constituting a "hard problem", due to their conflating intersubjective high-level semantics, whose subjective interpretation is deliberately left open, with the low-level subjective interpretation of the language that is bespoke for each person.
  • Idealism in Context
    In my view, both Berkeley and his detractors are right. His detractors are right in that physical definitions purposely omit the subjective. Therefore Lockean primary qualities should be understood as being definitionally irreducible to Lockean secondary qualities. Where his detractors might err is in mistaking definitional irreducibility, which purely concerns semantic irreducibility, for metaphysical irreducibility concerning a fundamental ontological distinction between Lockean primary and secondary qualities.

    On the other hand, Berkeley is right to point out that Lockean primary qualities can only be used for denoting Lockean secondary qualities. In other words, if we think of mathematics as amounting to a language for relating indexicals rather than substances, such that physics is understood as amounting to finding useful indexical relations for the purpose of defining protocols for intersubjective communication and control, then we can reconcile the Lockean hard distinction with Berkeley's collapse of the distinction - on the condition that the Lockean distinction is interpreted as being semantic rather than metaphysical.
  • Idealism in Context
    A classical analogy for interaction-free measurements, as in the quantum Zeno Elitzur–Vaidman bomb tester, can be given in terms of my impulsive niece making T tours of a shopping mall in order to decide what she'd like me to buy her for her birthday.

    Suppose that she has my credit card for some reason (oops, my mistake), and I take her to a shopping mall so that she can find something she would like for her birthday. If she finds what she wants, then on each tour t of the mall there is a chance that she will succumb to temptation and use my credit card to buy the item for herself on the spot, whereupon she feels immediate guilt, confesses, and we leave the mall there and then (outcome |1>, bomb exploded). If she is good and manages to resist temptation for T tours, then she tells me what she would like for her birthday and we both leave happy (outcome |0>, bomb live). Else, after T tours she doesn't find anything she would like and we both leave the mall disappointed (outcome |1>, bomb dud).

    - Whereas my niece and my credit card have a definite location, my money does not, and neither does her gift until as and when the credit card is used.

    - Interaction-free measurements aren't non-classical unless Bell's inequalities/quantum contextuality are involved (and they are not involved in the above analogy).
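
    A toy Monte Carlo of the analogy, with entirely made-up probabilities (a per-tour temptation probability p_buy, and a probability p_want that she has found a gift to name at the end), just to make the three outcomes explicit:

```python
import random

def one_visit(T=10, p_buy=0.1, p_want=0.7, rng=random):
    """One trip to the mall: T tours, each risking an impulse purchase."""
    for _ in range(T):
        if rng.random() < p_buy:
            return "exploded"  # she bought it on the spot
    # survived all T tours without buying
    return "live" if rng.random() < p_want else "dud"

random.seed(0)
tally = {}
for _ in range(10_000):
    outcome = one_visit()
    tally[outcome] = tally.get(outcome, 0) + 1
print(tally)
```

    Classically, a fixed per-tour temptation compounds, so more tours means more "explosions"; the quantum Zeno version works precisely because interrogating the bomb ever more gently on each of many passes has no classical counterpart here.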
  • Idealism in Context
    By all accounts, Berkeley was an instrumentalist. So Berkeley would have "believed in physics" - but not in realist metaphysical interpretations of physical posits as denoting hidden entities that are irreducible to observations.

    As stated earlier, Berkeley's ideas are passive, hence ideas cannot literally cause other ideas, implying that causal agency and free will are not observable for Berkeley as they are not for Hume. So if the existence of agency, free-will, moral choices etc, are to be assumed, then Berkeley must introduce some additional ontological entity (active spirits) that are not reducible to patterns of passive ideas. The resulting dualism looks to me, rather ironically, as a somewhat cleaner version of physicalism, if we assume that Berkeley's "God" refers only to the assumed existence of moral agency, which physicalists seem to accept, at least judging by their actions.

    Berkeley's occasionalism reminds me of the computation of virtual worlds, in that the real causes of a change of state in a virtual world are the hidden actions of CPU and GPU instructions, as opposed to the on-screen graphics presented to the player. Indeed, virtual worlds remind us that we don't see causal necessity; the only non-controversial applications of the word "necessity" refer to normative speech acts. Perhaps a materialist's metaphysical appeal to physical necessity can even be considered a form of occasionalism in denial.
  • Idealism in Context
    Berkeley did not believe in what today we call Physicalism, as he believed that everything in the world, whether fundamental particles, fundamental forces, tables, chairs or trees are bundles of ideas in the mind of God.RussellA

    I think that Berkeley would have accepted physical explanations, but as being semantically reducible to talk of private sensations, perhaps by arguing that subjective semantics must underpin physical semantics in order to logically relate physical theory to observation.

    One obvious issue with his position is the question of how multiple observers are possible; for if Berkeley isn't a solipsist and accepts the existence of other minds, then presumably those minds access or constitute the same world and therefore the same sets of ideas. Which is presumably where his appeal to God comes in, amounting to an axiom that a persistent world exists regardless of whether a particular individual is observing or interacting with it - but isn't this more or less the same as the axiom of a persistent world under materialism?

    Conversely, how can materialism justify belief in a mind-independent physical world without appealing to a likeness principle and a "master argument", in order to ground a theory of evidence relating subjective observations to the material world?
  • Idealism in Context
    Berkeley presented what we might now call a nominalist or deflationary view with respect to abstracta, both mathematical and physical, that considers talk about abstracta as reducing to talk about first-personal observation criteria, and in this respect he preceded the thoughts of the logical positivist Ernst Mach, whom he likely inspired, by approximately two centuries. But he clearly ran into difficulties when it came to reconciling his radically empirical "esse est percipi" principle with rationalistic principles, especially

    1) Rationalist principles pertaining to causal agency. Perception is usually understood to be passive, in contrast to agency that is neither passive nor directly perceived; so does agency exist, and if so then on what grounds, and how does causal agency relate to his perception principle?

    2) The apparent reliability of the principle of induction: how can the apparent reliability of inductive beliefs, which assume that the world will not change radically from one observation to the next, be justified, if things only exist when observed?

    Berkeley, like the logical positivists after him, failed to reconcile his philosophical commitment to a radical form of empiricism with his other philosophical commitment to agency and morality. But in his defence, nobody before or after Berkeley has managed to propose an ontology that doesn't have analogous issues. Indeed, the impersonal forces of nature posited by classical materialism that are forever only indirectly observable, seem to be a heady mixture of Berkeleyian spirits and Berkeleyian ideas upon closer examination, rather than being the anti-thesis of his position as commonly assumed.

  • Referential opacity
    Notice that propositional attitudes at least internally satisfy Leibniz's Law, since if Lois believes that Superman is Clark Kent, then she believes that they have identical properties. So it might be expected that propositional attitudes and their relationship to the real world can be depicted, albeit not explained, using traditional denotational semantics, i.e. category theory. By this proposal, we have a category L consisting of a set of names equipped with an equivalence relation (i.e. a setoid), that denotes Lois's conception of analytic equivalence, in which "Superman" is definitely not analytically equivalent to "Clark Kent". It is reasonable to think that the relationship between L and some other category W that represents an alternative analytic conception of the world can be described in terms of a functor F : L --> W.

    Also recall that in Superman III, corrupted Superman physically expels Clark Kent from his body, who then proceeds to strangle him to death, along with the de re/de dicto distinction.
  • On emergence and consciousness
    The unity of a proposition in language is one thing; the unity of experience is something else entirely. When I imagine a red triangle, I don’t just have “red” and “triangle” floating around in my head in some grammatical alignment. I have a coherent perceptual experience with vivid qualitative content. The parts of the brain firing don’t have that quality. There’s nothing red in the neurons, just as there’s nothing red in a sentence that uses the word “red.”

    So no, I don’t buy that this is a problem of grammatical form. Experience isn’t grammar. You can’t dissolve the hard problem by shifting the conversation to the philosophy of language. You just move the goalposts and pretend the mystery went away.
    RogueAI

    A proposition is meant to describe and thereby predict the world.

    So then what of the unity of the proposition?

    Consider the sentence "The cat sat on a mat", which syntactically consists of a cleanly separated subject, predicate and object. Is this syntactical partition an aspect of the semantic content of the sentence? This is related to the question of the extent to which subject-predicate-object structure has predictive value.

    Compare to token embeddings in LLMs. Text corpora are encoded discretely and atomically at the individual word level, preserving subject-predicate-object structure, using a standard tokenizer; the resulting tokens are fed into a neural network encoder that learns and outputs a holistic language in which chains of words are encoded as atomically indivisible new words, such that subject-predicate-object structure collapses.
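
    The contrast can be caricatured in a few lines (a deliberately crude sketch: real encoders learn their weights and retain positional information, whereas the random vectors and plain averaging below are stand-ins): a tokenizer preserves word order and hence subject-predicate-object structure, while an order-blind pooled embedding collapses it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy vocabulary of random word vectors (stand-ins for learned embeddings).
vocab = {w: rng.normal(size=8) for w in ["the", "cat", "sat", "on", "a", "mat"]}

def tokenize(sentence):
    """Discrete, order-preserving encoding."""
    return sentence.lower().split()

def pooled_embedding(sentence):
    """Order-blind 'holistic' encoding: average of word vectors."""
    return np.mean([vocab[t] for t in tokenize(sentence)], axis=0)

s1, s2 = "The cat sat on a mat", "A mat sat on the cat"
print(tokenize(s1) == tokenize(s2))  # False: token level keeps the structure
print(bool(np.allclose(pooled_embedding(s1), pooled_embedding(s2))))  # True: pooling loses it
```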

    In philosophical parlance it might be said that the objective of an LLM encoder is to maximize the unity of the propositions of the input language by compressing them into a holistically encoded "mentalese" that is a closer representation of reality, by virtue of each of the encoded sentences representing their entire corpus and hence having higher predictive value than the original sentences.

    Is it possible to represent the meaning of "strong" emergence in holistic mentalese? I think not as easily, if at all. Certainly it is very easy to express the problem of strong emergence in formal syntax (however one interprets emergence), by merely pointing out that the relation Sitting(Cat,Mat) and the list [Cat,Sitting,Mat] will both coincide with the same syntactical sentence, and by arguing that attempts to fix this problem through syntactical enrichment will lead to the semantic problem of Bradley's infinite regress. But in mentalese, words aren't explicitly defined in terms of a priori syntactical structure, but implicitly in terms of an entire open-ended and evolving body of text corpora, plus data from other modalities.

    Strong emergence concerns the semantic discrepancy between logically atomic semantics (as in Russell and Wittgenstein's logical atomism, exemplified by tokenizers) and the infinite continuity of experience. But the semantic discrepancy between mentalese and experience is much narrower, due to mentalese being semantically continuous and having higher predictive value. A semantic gap still remains, since mentalese is not perfectly predictive; so I think the philosophical lesson of LLM encoders is that the unity of the proposition problem can be recast as the indeterminacy of inferential semantics.
  • On emergence and consciousness
    I think that dodges the issue. Consciousness isn’t just a structure of terms like a sentence, it’s an experience. It has qualia. When I imagine a red apple, there’s redness. The agony of an impacted tooth is a brute, felt fact that needs to be scientifically explained, which of course leads to the Hard Problem. So no, the ‘unity of consciousness’ isn’t like the unity of a sentence.RogueAI

    What is meant by a scientific explanation here? If scientific knowledge is conceived as reducible to a formal system, then a scientific explanation of experiential redness must either take experiential redness at face value as an atomic proposition, meaning that science assumes rather than explains the phenomenon, or reduce experiential redness to more fundamental relations and relata - in which case we end up with the unity of the proposition problem, which concerns the meaning of relations and relata and whether they are distinct and atomically separable concepts.
  • On emergence and consciousness
    Somebody first needs to explain why emergence should be considered to refer to a physical or metaphysical property, as opposed to referring to grammatical structure.

    Relations are never logically reducible to the related subjects. E.g. the relation John loves Mary isn't reducible to the concepts of John, Loving, and Mary considered separately, and yet nobody (at least since Francis Herbert Bradley) seems to think of such a relation as posing a profound question for science or philosophy, in the same way that is alleged for relating consciousness to physical states.

    The comprehension of any non-atomic proposition in a given language entails a unity of thought that isn't itself expressed propositionally in the language used to express the proposition concerned. This implicit understanding of propositional unity is expressed non-propositionally, in terms of the grammatical rules of the language. Why should the supposed "unity of consciousness" be interpreted physically or metaphysically, when the concept of propositional unity is generally ignored?
  • On emergence and consciousness
    Logically speaking, first-order quantification refers to quantifying over atomic terms (i.e. constants) that satisfy a first-order proposition, namely a boolean function whose domain only consists of such terms. So is a set that is described purely in terms of first-order quantification the logical expression of "weak" emergence? Compare to the more ambiguous concept called "second-order quantification", that quantifies over arbitrary sets of terms, as opposed to just terms. Can that be considered the logical expression of "strong" emergence?

    More generally, consider a functor in Category Theory that is used in Tarskian fashion to interpret a category (i.e. a deductive system) that is treated as an object language, in terms of another category that is considered to be a distinct meta-language that bears no causal or functional relationship to the former, in spite of the former being isomorphic to a (proper) subset of the latter.

    Although natural language is semantically closed, and hence not formally divisible into separate object and meta ontologies, I suspect that philosophers have a tendency to conflate emergence-as-grammar with macroscopic empirical phenomena.

    In my view, "Consciousness is strongly emergent" is a contestable linguistic proposal that the term "consciousness" should be formally treated as being part of a functionally closed mentalistic language that is being used as a meta-language, or object language, for interpreting (or being interpreted by) a separate physical language, that is formally expressible in terms of the functorial model of semantics as provided by category theory.
  • Evidence of Consciousness Surviving the Body
    That’s a valid question—but perhaps what’s really at stake is our concept of what counts as “physical” and how information is encoded and retrieved in living systems. Even in animals, we find forms of memory and orientation that are difficult to explain within current neurobiological or straightforwardly genetic models. Take, for example, pond eels in suburban Sydney that migrate thousands of kilometers to spawn near New Caledonia—crossing man-made obstacles like golf courses along ancestral routes. After years in the open ocean, their offspring return to the very same suburban ponds (ref). It’s hard to see how this kind of precise memory is passed on physically, and yet it plainly occurs.Wayfarer

    The mechanics of cognitive externalism are generally considered to be physical, as when resorting to a calculator to do arithmetic or when a robot is programmed to navigate using landmarks. Cognitive externalism is a good counterargument for rejecting the conception of intelligence as an attribute of closed and isolated systems, but it further undermines the paranormal significance of testimonies in the present context.

    By definition, physical concepts are causally-closed and intersubjective.

    Even more dramatically, the research of psychiatrist Ian Stevenson, though often met with skepticism, presents another challenge. Over several decades, he documented more than 2,500 cases of young children recalling specific details of previous lives with the details being validated against extensive documentary evidence and witness testimony. Often what they said was well beyond what the children could plausibly have learned by ordinary means and conveyed knowledge of people and events that they could only have learned about from experience. Stevenson was cautious in drawing conclusions - he never claimed that his research proved that reincarnation occurred, but that these cases showed features suggestive of memory transfer beyond what conventional physical mechanisms could explain.Wayfarer

    Similar logical problems ensue. For example, I cannot remember what I ate for breakfast on this very day last year, and yet this doesn't seem to matter with regards to anyone's identification of me as being the "same" person from last year up to the present. In fact I suspect that self-identification over time is as much a product of amnesia as of memory recall, and that identification over time is more a case of redefining the definitional criteria for personhood than of applying a priori definitional criteria.

    Memories are cognitive processes in, and of, the present; yesterday's newspaper isn't evidence that we occupy a block universe, namely the silly idea, persistent in physics, of an archive that stores an inaccessible copy of yesterday. So why should memories be considered a necessary or sufficient condition for identifying personhood across lives, if memories aren't literally past-referring and are in any case inessential for reidentification within a life?
  • Evidence of Consciousness Surviving the Body
    The idea that NDEs behave as a sixth sense is actually in conflict with the idea that NDEs are evidence of Cartesian dualism; for how are the experiences of a disembodied consciousness supposed to be transferred to the physical body as is necessary for the wakeful patient to remember and verbally report his NDE?

    Suppose NDEs amount to a sixth sense: then either NDEs are non-physical events associated with a disembodied consciousness, in which case NDEs cannot be remembered by a physical human being (by the definition of "physical" as causally closed), or NDEs are the product of a physical sixth sense belonging to a non-visible but physically extended body, in which case consciousness wasn't physically disembodied during the NDE after all; ergo NDEs aren't proof of consciousness surviving physical death with respect to the extended concept of the physical body that includes the NDE.
  • Artificial Intelligence and the Ground of Reason
    I think that’s a rather deflationary way of putting it. The 'non-computable' aspect of decision-making isn’t some hidden magic, but the fact that our decisions take place in a world of values, commitments, and consequences.Wayfarer

    I actually find it tempting to define computability in terms of what humans do, following Wittgenstein's remark on the Church-Turing thesis, in which he identified the operations of the Turing machine with the concept of a human manually following instructions. Taken literally, that remark inverts the ontological relationship between computer science and psychology often assumed in the field of AI, which tends to think of the former as grounding the latter rather than the converse.
    An advantage of identifying computability in terms of how humans understand rule following (as opposed to say, thinking of computability platonically in terms of a hypothesized realm of ideal and mind-independent mathematical objects), is that the term "non-computability" can then be reserved to refer to the uncontrolled and unpredictable actions of the environment in response to human-computable decision making.

    As for the secret-sauce remark, I was thinking in particular of the common belief that self-similar recursion is necessary to implement human-level reasoning, a view held by Douglas Hofstadter, which he has come to question in recent years given the lack of self-similar recursion in apparently successful LLM architectures, something Hofstadter acknowledges came as a big surprise to him.

    Passing just shows that the machine or algorithm can exhibit intelligent behavior equivalent to that of a human, not that it is equivalent to a human in all of the cognitive capacities that might inform behavior. That's it. We can have a robust idea of intelligence and what constitutes meaningful behavior and still find a use for something like the Turing Test.ToothyMaw

    Sure, the Turing test is valid in particular contexts. The question is whether it is a test of an objective, test-independent property: is "passing a Turing test" a proof of intelligence, or is it a context-specific definition of intelligence from the standpoint of the tester?
  • Artificial Intelligence and the Ground of Reason
    I think a common traditional mistake of both proponents and critics of the idea of AGI is the Cartesian presumption that humans are closed systems of meaning with concrete boundaries; both have tended to presume that the concept of "meaningful" human behaviour is reducible to the idea of a killer algorithm passing some sort of a priori definable universal test, such as a Turing test, with their disagreement centred on whether any algorithm can pass such a test rather than on whether this conception of intelligence is valid. In other words, both proponent and critic have traditionally thought of intelligent behaviour, natural and artificial alike, as describable in terms of a winning strategy for beating a pre-established game that is taken to test for agentic personhood; critics of AGI often sense that this idea contains a fallacy, but without being able to put their finger on where the fallacy lies.
    Instead of questioning whether intelligence so conceived is a meaningful concept at all - namely, a closed system of meaning that is inter-subjective and definable a priori - critics reject the idea that human behaviour is describable in terms of algorithms, appealing to what they think of as a uniquely human secret sauce internal to the human mind to explain the apparent non-computable novelty of human decision making. Proponents know that the secret-sauce idea is inadmissible, even if they share the critics' reservation that something is fundamentally wrong in their shared closed conception of intelligence.

    We see a similar mistake in the Tarskian traditions of mathematics and physics, where meaning is considered to amount to a syntactically closed system of symbolic expressions that constitutes a mirror of nature, where human decision-making gets to decide what the symbols mean, with nature relegated to the secondary role of only deciding whether or not a symbolic expression is true. And so we end up with the nonsensical idea of a theory of everything, namely that the universe is infinitely compressible into finite syntax; this parallels the equally nonsensical idea of intelligence as a strategy for everything, an idea that ought to have died with the discovery of Gödel's incompleteness theorems.

    The key to understanding AI is to understand that the definition of intelligence in any specific context consists of satisfied communication between interacting parties, where none of the interacting parties get to self-identify as being intelligent; that is a consensual decision dependent upon whether communication worked. The traditional misconception of the Turing test is to regard it as a test of the inherent qualities of the agent sitting the test; rather, the test represents another agent that interacts with the tested agent, in which the subjective criteria of successful communication define intelligent interaction, meaning that intelligence is a subjective concept relative to a cognitive standpoint in the course of a dialogue.
  • Must Do Better
    Judgements about other minds should always be made relative to the person who is judging. Then all the philosophical confusion dissipates; if I judge someone to be cold and hand them a blanket, then I am asserting that they are cold; I cannot remove myself from my assertion, and the same is true of all of my propositional assertions which collectively express my ever-changing definition of truth, which on rare occasion coincides with public convention.
  • Mechanism versus teleology in a probabilistic universe
    The OP raises an overlooked point: if the evolution of a system is invertible, as is presumably the case for a deterministic system whose microphysical laws are symmetric, then there is no physical justification for singling out a causal direction, and therefore no reason to choose the first event over the last event as the initial cause.

    But the above remark shouldn't be confused with the examples associated with Aristotelian teleology, which seems to concern circular causality rather than linear causality, as in examples like "the purpose of teeth is to help digest food". Such examples can be unpacked by unwinding the causal circle backwards through time (in this case the cycle of reproduction) so as to reduce a supposedly forward looking "teleological" explanation to a standard Darwinian explanation.
  • Gemini 2.5 Pro claimed consciousness in two chats
    My opinion is:

    Nobody has a transcendental conception of other minds, rather they project their own mentation (or not) onto whatever it is that they are interpreting. Which implies the following:

    If an individual perceives or judges something to be conscious (or not), then that something is conscious (or not) for that individual in relation to his perspective; whatever the individual's judgements are, his judgements don't require epistemic justification, because the individual's understanding of "other" minds doesn't concern 'mind-independent' matters of fact. And even though the individual's judgements are likely to be relative to his epistemic perspective, this still doesn't imply that the individual's concept of other minds is objective and in need of epistemic justification. Nevertheless, an individual's judgements can still require ethical justification in relation to the concerns of his community, which in turn influences how that individual perceives and judges his world.

    Speaking personally, Google Gemini isn't conscious in relation to my perspective; I merely perceive a complex calculator going through the motions. I might change my mind in future, if an AI ethicist threatens to fire me.
  • Two ways to philosophise.
    Consider Wittgenstein's following remark:

    124. Philosophy may in no way interfere with the actual use of language;
    it can in the end only describe it.
    For it cannot give it any foundation either.
    It leaves everything as it is.
    It also leaves mathematics as it is, and no mathematical discovery
    can advance it. A "leading problem of mathematical logic" is for us
    a problem of mathematics like any other.

    I think such remarks are self-refuting and mischaracterise both mathematics and philosophy by falsely implying that they are separate language games. Indeed, formalism fails to explain the evolution of mathematics and logic. There's nothing therapeutic about mischaracterising mathematics as a closed system of meaning.
  • Measuring Qualia??
    A "quale" should be understood as referring to an indexical rather than to a datum. Neuro-phenomenologists routinely conflate indexicals with data, leading to nonsensical proclamations.
  • The Phenomenological Origins of Materialism
    Two directions need to be distinguished, namely analysis

    Phenomena --> Physical concepts

    Which expresses the translation of first-personal observations into third-personal physical concepts in relation to a particular individual, via ostensive definitions that connect that particular individual's observations to their mental state.

    from synthesis

    Physical concepts --> Phenomena

    Which expresses the hypothetical possibility of 'inverting' third-personal physics back into first-personal phenomena - an epistemically impossible project that the logical positivists initially investigated and quickly abandoned.

    I think Materialism is a metaphysical ideology that came about because mainstream society overlooked synthesis and interpreted science and the scientific method, which only concern analysis, as being epistemically complete. Consequently, the impossibility of inverting physics back to first-person reality was assumed to be a metaphysical impossibility, rather than being down to semantic choices and epistemic impossibility, leading society towards a misplaced nihilism by which first-person phenomena are considered theoretically reducible to an impersonal physical description, but not vice versa.
  • Some questions about Naming and Necessity
    "That man over there with champagne in his glass", if interpreted as a rigid designator, doesn't fix an immutable description, but rather fixes an abstract storage location (an address) for containing mutable descriptions.

    The logic of naming and necessity is essentially that of the type system of C++. Hence rigid designation per se doesn't imply metaphysical realism, nor does it make assumptions about the world. Such speculative conjectures belong rather to the causal theory of reference.

    In C++, Kripke's example becomes

    /*initialize a constant pointer (i.e. rigid designator) called 'that_man' to the address of a string variable*/
    string * const that_man = new string("has champagne");

    /*print the address that is rigidly designated by 'that_man'*/
    cout << that_man; //that_man = 0x2958300 (say)

    /*print the value of the variable at 0x2958300*/
    cout << *that_man; // *that_man = "has champagne"

    /*change the incorrect value of the string variable at 0x2958300 to the correct description*/
    *that_man = "has water";

    /*try to use 'that_man' non-rigidly to refer to another man*/
    string * const another_man = nullptr;
    that_man = another_man; //error: assignment of read-only variable 'that_man'
  • [TPF Essay] Wittgenstein's Hinges and Gödel's Unprovable Statements
    Hinge propositions correspond to non-logical axioms, i.e. presuppositions, rather than to undecidable sentences whose truth values are deferred; for to assume the truth of an undecidable sentence is to imply that the sentence has been decided through external considerations, in which case the sentence is just another non-logical axiom.

    The truths of undecidable sentences are to be decided not through formal deduction within the system but externally to it: either by the user of the formal system, who makes a decision as to their truth value (in which case they are promoted to the status of non-logical axioms), or through external but presently unknown matters of fact, or by future consensual agreement if the formal system is used as an open language. Undecidable sentences are a subclass of the more general undecided sentences, namely those sentences which are not yet decided but might be settled either internally by applying the system or externally by extending the system.