Comments

  • There Are No Identities In Nature
    The only problem for your view seems to be that whatever philosophical implications we might think are inherent in the maths based science cannot themselves be expressed in mathematical language.John

    Why do you say "cannot" as if a no-go theorem applied? ;)

    But I'm not really saying that the truths of maths or logic can only be articulated in an absolutely general syntax of operators and variables. We can use ordinary (technical) language to talk through the equations with more semantic background. We can translate to a certain extent back downwards, just as we can abstract from ordinary speech towards a mathematical expression.

    So some kind of translatability is presumed. All scientists, metaphysicians and mathematicians have a native language through which they were introduced steadily to some domain of high abstraction.

    But then something important still usually feels lost in translation when they have to go from abstractions back to words. And what is lost is precisely the clarity that was gained in moving from words to abstractions in the first place.
  • Representation and Noise
    I think it is just here where have nothing more than intuition to rely on. Anything we might believe regarding "prime matter, pure potential, unformed possibility, uninterpreted existence" will be the result of a groundless (in the empirical or logical sense) leap of faith.John

    Why can't we apply rational argument in the way that I have done to arrive at some image of the unsayable and unthinkable?

    I agree that following that path is what is difficult - the most extreme abstraction. But it also seems obvious that intuition doesn't even make a start, because who has ever spontaneously intuited the notion of vagueness in your experience?

    When talking about things like the creation of existence or prime matter, normal folk only apply "intuitions" like something can't come from nothing, everything has a reason, causes precede effects, etc.

    In other words, normal folk are only going to continue to think about foundational issues using the same mechanistic habits of logic that have been drummed into them by Western enlightenment culture - a culture evolved to build machines. Or else they are going to default to the antithesis to that - Romanticism and its idealist causality, a world moved by ghostly spirits.
  • There Are No Identities In Nature
    I would instead say that maths is a branch of logic. It's a specialized form of logic, and that's what makes it so precise. But the same thing which makes it so precise, its speciality, also limits its scope, or range of applicability.Metaphysician Undercover

    Either way, the point is that they are the development of a more abstracted level of language. And increased precision doesn't have to mean a lesser scope. Quite the opposite in fact. Greater generality and greater particularity go together here.

    So if you carry out a scientific method of empirical observation which deals only with measurements, quantities, then the qualities which cannot be measured are neglected.Metaphysician Undercover

    More nonsense. Science talks about qualities in a maximally abstract fashion - notions like time, space, energy, information, entropy. And it is that clarity about qualities that engenders clarity about quantification.

    Again, I beg to differ. I am not calling for a more primitive mode of reasoning, I am calling for a less narrow minded form of observation.Metaphysician Undercover

    ...and ignoring Occam's razor. There is a good reason for wanting to quantify reality using the least number of qualitative concepts.

    But if you consider, as I suggested, that there are qualities within the world that we haven't got the capacity to measure as quantities, then to understand those qualities, we need to proceed with observations which are not measurements.Metaphysician Undercover

    Do you have a list of these unmeasurables in mind? I guess it consists of the usual things like poetry and spirit; the mind, the divine, the meaningful, the aesthetic; beauty, good and truth.

    You see I reach a different conclusion when we arrive at such abstractions without apparent ways to quantify them - except by socio-cultural appeals to "look inwards and experience their phenomenological reality". To me, this shows we just don't have a philosophical-strength understanding of what we want to talk about.
  • Representation and Noise
    That form and material are distinct was all the point I was making to Terrapin.Mongrel

    Perhaps you could restate what exactly it might be that you are keen to discuss. Seems like you are channeling Banno at the moment. :)

    What I am getting at is that there is the usual hylomorphic issue when it comes to thinking of substantial being. We only know being when it is formed into some thing. And thus the notion of unformed being becomes deeply "other".

    Somehow the stuff that accepts the form must be some kind of already formed material itself, and yet we just said that can't be. And so the "prime matter" becomes something itself immaterial - lacking the very definiteness we require of materiality. The material part of the substantial equation turns into something more akin to becoming - a potential to be.

    So when talking about wax, we can try to talk about the matter that endures or is conserved as a kind of proper material stuff by saying it's all still just a bunch of atoms. The arrangement is different - a candlestick vs a wax puddle. Or we could enlarge the view and talk about the entropy change that makes a (less materialistic) material difference. In some sense, a potential has been spent. Some part of what was an orderly candle with its waxy energy bonds has been dissipated in the light and heat that helped melt the rest of it into the more entropic form of a waxy puddle.

    Yet still, atoms are a formed kind of stuff. Even energy is a formed kind of stuff - electromagnetic radiation or some other such thing. We still haven't drilled down far enough to hit bottom and discover what matter is once its formal clothing has been stripped away to leave it standing bare.

    As we were discussing earlier, even randomness is only conceivable in the guise of already formed material patterns - possibility not naked, but corralled by boundary conditions to give it statistical regularity.

    Sorry to be boringly repetitive, but it is precisely these considerations that eventually lead me to mention vagueness as the primary material principle here.

    In some way - some way that we would have to make metaphysically good on - the deepest level of materiality would be unbound action. Unlimited fluctuation. Energy unrestrained by dimension. Chaos without boundaries.

    Talking about the world in terms of constrained form is easy. Imposing further rational pattern on found substantial actuality of the physical world is something that has become second nature to Homo mechanicus.

    But conceiving of prime matter, pure potential, unformed possibility, uninterpreted existence, is at the opposite end of metaphysics - the hardest and last thing we would do.
  • There Are No Identities In Nature
    I explained the alternative, it involves first, the recognition that our measurement techniques are inadequate for measuring some aspects of the world, in particular, the aspects associated with the assumed continuum. So we need to go back to a method of focusing on description rather than measuring.Metaphysician Undercover

    In the act of describing, the digital method (rules of logic) is applied to the tool of description, language. In the act of measuring, we tend to believe that the digital method is applied directly to the thing being measured, but this is an illusion. In reality, the limitations of the digital method have been incorporated into the language of measurement. The result is that any observations that are measurements, are necessarily theory-laden, due to the restrictions which are inherent within the measurement system. That is the position to which science has progressed today.Metaphysician Undercover

    So basically you want to use words, not maths. And my point is that there is a reason why maths is where we arrive. Logic is itself a branch of maths in its highest state of development, you realise?

    So first you are not talking about a different method of reasoning and measurement, just advocating for a less crisply developed level of reasoning and measurement.

    And then it is not as though I am saying there are no dangers in a more abstract level of discourse about nature. We are in some sense starting to work blind - allowing our formal tools to take over the job of explaining nature.

    But this is the way things have gone because pragmatically they have worked. Maths is unreasonably effective as they say. Reality is surprisingly intelligible.

    So your call to a more verbal and "picture in the head" level of metaphysical exploration is not actually an alternative method, just a return to a more primitive mode of scientific reasoning.

    Now there is no harm in doing some of that too. That is the way we would expect to start to develop some actually fresh insight which - if it works out - could be properly mathematised. But in being a preliminary activity, it wouldn't replace the higher level of abstraction that mathematical discourse can attain. It is not an "alternative" in that sense.
  • There Are No Identities In Nature
    So which is it - do vague and crisp map on to analog and digital or do they not? If they do, in what sense can you claim that the analog/digital distinction is derivative from vagueness (circularity). If they don't, you're back to mythology.StreetlightX

    The answer is the same as before. When we are talking about the ontology of a modelling system, we have two realms in play - the material and the symbolic. And the vague~crisp can apply as a developmental distinction in either. And indeed to the modelling relation as a whole. The vague~crisp is about a hierarchy of symmetry-breakings, a succession of increasingly specified dichotomies.

    So in the symbolic realm, a vague state of symbolism is indexical. A still vaguer state is iconic.

    If you say "look, a cat", that's pretty definite. If you point at a cat, I might be a little uncertain as to exactly what your finger indicates. If you make mewing and purring noises, I would have to make an even greater guess about the meaning you might intend.

    So as I argued using the example of the wax cylinder, informational symmetry breaking can be weak because it is easily reversible - still strongly entangled in the physics of the situation - or it can be strongly broken in being at the digital end of the spectrum and thus as physics-free as possible.

    If I were to say "look, the universe", then physically the words involve no more effort than talking about a cat. But pointing gets harder, and pantomiming might really work up a sweat.

    But then any form of communication or representation has already crossed the epistemic cut Rubicon in creating a memory trace of the world and so made the step to being physics-free. So even vague iconicity is already crisp in that sense. And thus there is another whole discussion about how the matter~symbol dichotomy arose in nature. And a further whole discussion about whether the abiotic world - with its dissipative organisation - has pansemiotic structure, and so this notion of "digitality" as negatively self-reflexive demarcation (or the constraint of freedom) has general metaphysical import there.

    We can see that discrete~continuous is just such a general metaphysical dichotomy - the two crisp counter-matched possibilities that would do the most to divide our uncertainty about the nature of existence. And I would remind you of your opening statement where you said this was all about a generic metaphysical dichotomy that applied to all "systems"....

    Broadly speaking, one can speak of two types of systems in nature: analog and digital.StreetlightX

    So that sweeping claim is what I have been addressing. And my argument is that when it comes to reality as a system, it is just the one system - formed by dividing against itself perhaps.

    This is why I find your exposition confused - although also on the right track. So I tried to show that to resolve the dualism implicit in your framing here, we have to ascend to Peircean triadic semiosis to recover the holism of a systems monism. We have to add a dimension of development - the vague~crisp - so as to be able to explain how the crisply divided could arise from some common source.

    Your opening statement would be accurate if it made it clear that you are talking about symbolic systems or representational systems - systems that are already the other side of the epistemic cut in being sufficiently physics-free to form their own memory traces and so transcendently can have something to say about the material state of the world.

    But instead you just made a direct analogy between analog~digital signal encoding in epistemic systems and continuous~discrete phenomena in ontic systems.

    Now again, there is something important in this move. It has to be done in a sense because the very idea of a physical world - as normally understood in its materialistic sense - just cannot see the further possibility of semiotic regulation, the new thing that is physics-free memory or syntax-based constraints. So you can't extract symbols from matter just by having a full knowledge of physical law. As you/Wilden say, the digital, the logical, the syntactical, appears to reach into the material world from another place to draw its lines, make its demarcations, point to the sharp divisions that make for a binary "this and a that".

    So saying in a general metaphysical way that the material world is analog, and the digital is sprung on this material world from "outside itself" as a further crisply negating/open-endedly recursive surprise, is a really important ontological distinction.

    But then confusion ensues if one only talks about the source of crispness and the fact of its imposition, and neglects to fit in its "other", the vagueness which somehow is the "material ground" that takes the "formal mark" of the binary bit. Or even the analog trace.

    So to talk generically about reality as a system - which indeed is a step up from process philosophy in talking about symbol as well as matter, hierarchy as well as flow - is where we probably agree in a basic way. Structuralism was all about that. Deconstructionism was also about that - in the negative sense of trying to unravel all symbolic distinctions. Deleuze was about that, I accept.

    But again, the metaphysics of systems is always going to be muddy without being able to speak about the ontically vague - Peircean Firstness, Anaximander's Apeiron, the modern quantum roil. Sure we can talk about grades of crispness - iconic vs indexical vs symbolic. But to achieve metaphysical generality, we have to be able to define crispness (computational digitality, or material substantiality/particularity/actuality) in terms of what crispness itself is not.

    And to return to your OP.....

    A few quite important things follow from this, but I want to focus on one: it is clear that if the above is the case, the very notion of identity is a digital notion which is parasitic on the introduction of negation into an analog continuum. To the degree that analog systems do not admit negation, it follows that nothing in an analog system has an identity as such. Although analog systems are composed of differences, these differences are not yet differences between identities; they are simply differences of the 'more or less', or relative degrees, rather than 'either/or' differences.StreetlightX

    ...this is where your keenness to just dichotomise, and not ground your dichotomy as itself a developmental act, starts to become a real blinkering issue.

    Analog signals are still signals (as Mongrel points out). They are differences to "us" as systems of interpretance. An analog computer outputs an answer which may be inherently vaguer than a digital device, but it used to have the advantage of being quicker. And also even more accurate, in that early digital devices were 8-bit rather than 16-bit or 64-bit - or however many decimal places one needs to encode a continuous world in floating point arithmetic and actually draw a digitally sharp line close enough to the materially correct place (if such a correct place even exists in a non-linear and quantumly uncertain world).

    So whether variation or difference is encoded analogically or digitally, it already is an encoding of a signal (and involves thus a negation, a bounding, of noise). Then while the digital seems inherently crisp in being a physics-free way to draw lines to mark boundaries - digital lines having no physical width - in practice there still remains a physical trade-off.

    The fat fuzzy lines of analog computing can be more accurate, at least in the early stages of technical development. The digital lines are always perfectly crisply defined whether they use 8-bit precision or 64-bit precision - this is so because a continuous value is just arbitrarily truncated (negated) at that number of decimal places. But that opens up the new issue of whether the lines are actually being dropped in the right precise place when it comes to representing nature. Being digital also magnifies the measurement problem - raises it now to the level of an "epistemic crisis". That is, the fallacy of misplaced concreteness.
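
    To make that truncation business concrete, here is a toy sketch (my own illustration, not any actual device's arithmetic). Digitising just chops a continuous value at some arbitrary bit depth - the cut is perfectly crisp at every precision, and only the residual error of where the line lands changes:

    ```python
    # Toy illustration: digitisation truncates a continuous value at an
    # arbitrary precision. The cut is perfectly crisp at any bit depth;
    # only the residual error (where the line lands) changes.

    def quantise(x, bits):
        """Truncate x (assumed in [0, 1)) onto a grid of 2**bits levels."""
        levels = 2 ** bits
        return int(x * levels) / levels

    x = 0.8414709848078965  # a stand-in "continuous" value (sin 1.0)

    for bits in (8, 16, 64):
        q = quantise(x, bits)
        print(f"{bits:2d}-bit: {q:.17f} (error = {abs(x - q):.2e})")
    ```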

    So it just isn't good enough to say analog signals can be signals without the need for negative demarcation and the open-ended recursion that allows. A bell rings a note - produces a sine wave - because vibrations are bounded by a metal dome and so are forced to conform to whole-number harmonics. Identity or individuation does arise in analog processes - in virtue of them being proto-digital in their vaguer way.

    Yes, this is a complication of the simpler starting point you made. It is several steps further down the chain of argument when it comes to a systems ontology. And as I say, you/Wilden are starting with a correct essential distinction. We have to pull apart the realms of matter and symbol to start to understand reality in general as a semiotic modelling relation with the power to self-organise its regular habits.

    But for some reason you always get snarky when I move on to the complexities that then ensue - the complexities that systems ontologists find fruitful to discuss. The vague~crisp axis of development being a primary one.
  • There Are No Identities In Nature
    In other words, you can't have your cake and eat it too: if you insist that the analog/digital distinction is made at the level of digital mapping to begin with, the projection of a more primordial ground of vagueness is simply that: a mythological projection that doesn't abide by the very epistemological constraints you ought to be beholden to.StreetlightX

    Weird. The definition of vagueness is that it is the "not yet digitised". Vagueness is that state of affairs to which the principle of non-contradiction fails to apply. And thus it stands orthogonal to crispness, the state where A/not-A are busy doing their logically definite thing.

    So in a set theoretic sense, the vague~crisp is the superset here. As I said earlier, it is Peircean thirdness in incorporating the whole of the sign relation - the three levels of logic that would be vagueness, particularity and generality. A/not-A is just the digital crispness which is secondness, or the logic of the particular.
  • Representation and Noise
    I was talking about representation. How do you see that being related to randomness?Mongrel

    I still don't get you. All I said was that we can surely have representations of randomness. When I look at TV snow, my interpretation is that I'm staring at "white noise". I see it positively as a characteristic natural pattern, a form, and so it is not uninterpreted or unrepresented.
  • Representation and Noise
    I don't think uninterpreted means random.Mongrel

    No idea how that is a response to anything I said.
  • There Are No Identities In Nature
    It is a mistake to think that the world must fit within our systems of measurement, the "bounds" which we imposed. We must adapt our systems of measurement, shape them to the world. But even this requires a preliminary understanding, which cannot be given by measurement because the system for measurement will be created based on this understanding.Metaphysician Undercover

    Well if all this is a mistake, what is your alternative? Can you even define your epistemic method here?

    The Peircean model of scientific reason says yes, we have to begin with just a guess, a stab at an answer. And a good stab at an answer is one that tries rationally to imagine the limits that must bound that answer. That is what gives us a reference frame from which to start making actual empirical measurements. And from measurement, we can construct some history of conformity between the thing-in-itself and the way we think about the thing-in-itself. So retrospectively, any founding assumptions, any rational stabs in the dark which got things started, are either going to be justified or rejected by their own empirical consequences. If they were in fact bad guesses, experience will tell us so. And contrariwise.

    Importantly (and something that goes to the analog~digital distinction as SX, channelling Wilden, has defined it), this also means that the model doesn't have to end up looking anything like what it is supposed to "represent".

    I think what troubles you is this apparent loss of veridicality. You want the kind of knowledge of the world that is literally analogic - an intuitive picture in the head. If someone is talking about atoms, you want to see a reality composed of billiard balls rattling around a table. If someone talks about development, you want to see a point moving along a drawn timeline, the future steadily moving backwards to become the past as intervals of the present get consumed.

    But higher order understanding of the world is different in being digital. It throws away the material variation to leave only the digital distinctions - the description of the boundaries or constraints, the description of the rate independent information in the shape of eternal or timeless laws and constants.

    So semiotic modelling is this curious thing of not being a re-presentation of what actually exists in all its messy glory. Instead, it is a boiling down of reality into the sparseness of abstraction entrained to particularity - the semiotic mechanism of theory and measurement.

    Sure, it is still nice to picture billiard balls, waves, timelines, and all kinds of other analogic representations of the thing-in-itself. But the digital thing is all about giving that kind of re-presentation up. In the extreme it becomes the kind of instrumentalism that SX would find disembodied and "un-aesthetic". One may find oneself left simply with a syntax to be followed - a mathematical habit - which works (it makes predictions about future measurables) and yet for the life of us, we can't picture the "how". That's pretty much where the Copenhagen Interpretation ended up with quantum mechanics.

    So the Peircean/digital/semiotic approach to modelling the thing-in-itself is both cleanly justified in terms of epistemology, and also never going to deliver quite what you probably think it should. This is why, whenever I talk about vagueness, you just keep saying: tell me about it again, but in a way that is not purely rational and instead gives me a picture I can believe inside my head.

    But sorry, that is what it means for modelling to be embodied, or meaning to be use. We have to head in the direction of extreme mathematical-strength abstraction so as to be able in turn to make the most precise and telling acts of measurement - to also digitise experience itself as acts of counting, a harvesting of a field of symbols.

    You equate intelligible with measurable. But measurable is restricted by our capacity to measure. A thing is only measurable in so far as we have developed a way to measure it. However, a thing is intelligible to the extent that we have the capacity to describe it, and description does not require measurement.Metaphysician Undercover

    So just as I say, you yearn for analog iconicity - a concrete picture in your head that you can stand back and describe ... as if such a representation were the thing-in-itself floating veridically before your eyes.

    Pragmatism says that is a Kantian pipedream. A picture in your head is just going to be a picture. What actually matters - the only thing that in the end you can cling onto - is the functional relationship you can build between your model of existence, and the control that appears to give you over that existence. And the digital is stronger than the analog in that regard because it decisively erases unnecessary details. It can negate the real in a way that makes for the most useful map of the real.

    And we all know how a map bears bugger all material resemblance to the physical reality of the territory-in-itself. But who complains about a map of a country having "unreal" properties like being small and flat enough to fold up in your back pocket?
  • Representation and Noise
    Where there is the perception of noise, is there necessarily an accompanying idea of the uninterpreted... the unrepresented? IOW... is that the form associated with noise... the formless?Mongrel

    This is a fascinating issue. Can we conceive of "pure randomness"? Even white noise has a structure.Hoo

    Hoo is right. Even noise has form. Any model of randomness still depends on identifiable boundary conditions. So noise comes in many colours - https://en.wikipedia.org/wiki/Colors_of_noise
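
    For instance, white and pink noise are built from exactly the same randomness - only the global constraint on how power spreads across frequencies differs. A minimal sketch of the idea (my own toy demo, assuming numpy is to hand):

    ```python
    # Same random source, two "colours": white (flat spectrum) vs
    # pink (1/f spectrum), produced by reshaping the global constraint.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2 ** 14
    white = rng.standard_normal(n)

    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                  # dodge divide-by-zero at DC
    pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)

    for name, sig in (("white", white), ("pink", pink)):
        power = np.abs(np.fft.rfft(sig)) ** 2
        half = len(power) // 2
        # Ratio of low-frequency to high-frequency power:
        # roughly 1 for white noise, much greater than 1 for pink.
        print(name, round(power[1:half].sum() / power[half:].sum(), 1))
    ```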

    So in an ontological sense, randomness comes in different varieties that speak to different states of global constraint. Randomness is an actual pattern. And "pure randomness" would be something "actually patternless" - what I would define as a vagueness (which is pretty unpicturable).

    But then there is a further question of how good we are psychologically at distinguishing the various shades of randomness in the world. And of course mostly we are quite bad because we are untrained in this level of pattern recognition. Or to put it another way, mostly in life it doesn't really matter.

    Also our perceptual equipment has its own signal processing biases - like an increased sensitivity to sounds in the range of spoken speech - which "distort" the bare physical pattern of energies the world might be producing. So to see types of randomness in their "wild state", we would have to somehow cancel out that kind of inbuilt perceptual bias.

    Thus in a general way, we are seeing patterns that are really there in nature when we dismiss something as just "random noise". But as patterns, they are also patterns with the least possible meaning or message. In paying attention to randomness as itself "a perceptual thing" - a field of activity like the crackling sound of white noise, or the restless firing of static on an old vacuum tube TV screen - we are thinking about precisely that which we are normally set up to filter out. We are representing as present what we would normally want to suppress and render absent. We are making meaningful what is usually interpreted in terms of a generalised lack of significance - a collection of differences that precisely don't make a difference.
  • There Are No Identities In Nature
    As I've said quite a few times now, the distinction between the digital and the analog is quite precisely defined by the presence of negation and self-reflexivity.StreetlightX

    You are merely choosing to highlight the bit I already agree with in general fashion. From a biosemiotic viewpoint, that states the obvious.

    But what I have been pointing out is that your framing of the issues lacks the further dimensionality that would allow it to be actually developmental in the way a process view needs to be. Your way of talking about the continuum or the analog is fuzzy over the issue of fuzziness. You talk about the analog/continuum as being itself crisply existent (a realm of actualised material being), and then at other times you talk about it as a ground for further development - the less specified basis for the discrete/digital machinery that transcends it so as to have a view of it.

    Of course in your confusion, that becomes the confusion you accuse me of. I'm just patiently taking you back to the source of symmetry-breaking to show how both continuity and discreteness co-arise from pure vagueness. And analog~digital would have arisen as modes of communication or representation in the same fashion.

    As I have said, it is important that the analog or iconic representation already exists on the other side of the epistemic cut - on the side of the symbolic or "rate independent information". It is a distinction made at the level of the mapping, even if it means to be talking about a distinction in the (computational!!) world.

    And because you set off in the OP to say something logically concrete about metaphysics, you can't just gaily presume that what is true of the map is true of the territory. That further part of the argument must be properly supported.

    Either you don't understand that or you simply want to avoid the issue.

    So it is fruitless to keep trying to return me back to Wilden's perfectly acceptable 1970s analysis of the distinction between analog and digital computation. You know I agree with that.

    The interesting question is then the ontological or metaphysically-general one of how does that fact about representative modes change our conception of nature itself? What new vantage point does it give us for dealing with the central questions of process philosophy, like the mechanics of development and individuation?

    A difference that makes a difference can be described analogically or digitally, represented in terms of what it is, or what it is not. But that does not yet get at the deeper question of how representation itself arises (via an epistemic cut), nor how bare difference arises (as an ontic symmetry breaking).
  • There Are No Identities In Nature
    This is a very real issue, especially with terms of ontological or metaphysical significance. We have a conception of future and past for example. This conception models these two as pure opposition. Take a point, on one side of that point is past, the other side is future. We could build a massive epistemic structure on a conception like this. The problem is, that in the real world, and common understanding of future and past, there is an implied necessary temporal priority, past has gone by, and future is yet to come. The conception, of pure opposition, two sides of a point, fails to take this into account. Therefore any conceptual structure built on this concept is completely illusory, it fails to take into account what we are really referring to when we use the words "future" and "past".Metaphysician Undercover

    I'm not sure what you think I'm arguing here. It has been my point that we impose our frameworks of intelligibility on the world.

    But then a dialectic or dichotomous logic ensures that this process is rigorous. In being able to name the complementary limits on possibility, we have our best shot at talking about the actuality of the world, as it must lie within those (now measurable) bounds.

    So if you want to talk about "time", then it is only going to be an intelligible notion that we can project onto reality in a measurable fashion to the degree we have formed a crisply dichotomous model of it.

    For example, the classical conception of time and change developed by the dividing off of stasis and flux, being and becoming. Then space and time became a division of dimensions - if you imagine existence in terms of straight lines, then you can imagine points travelling along the lines so that first they were here, later they were there.

    Both relativity and quantum theory have since shown space and time are not so distinct and we are back to having to include energy - as the thermal source of any change - in the spatiotemporal picture. The rate of time can be relativistically bent by energy density. Time and energy form a dichotomistic uncertainty relation in quantum theory. Both even challenge the notion of before and after. Relativity permits wormholes in time. Quantum theory appears to demand some form of retrocausality to explain quantum eraser experiments.

    So we have a variety of ways of thinking about time - all of them models that try to impose some kind of fundamental dichotomy that would make time an intelligible, and thus measurable, concept of the thing-in-itself.

    A logic of vagueness is a further such modelling exercise. And while I might employ familiar (causal) notions like before and after, or earlier and later, to talk about semiotic development, clearly I do so in a new context - one in which any more traditional notion of temporal co-ordinates is itself going to be emergent.

    And as I say, this is not wild metaphysical hand-waving. It is where Big Bang cosmology has led. The Planck scale now encodes a dichotomous or reciprocal relation between spacetime and energy density. The Planck length goes as the square root of ħG/c³, while the Planck energy goes as the square root of ħc⁵/G.

    So a quantum of existence - the fundamental unity that the triadic Planck relation expresses - encodes a dichotomously matched pair of limits.
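
    Spelling that reciprocity out in standard textbook form (just the usual Planck units, for reference):

    ```latex
    \ell_P = \sqrt{\frac{\hbar G}{c^{3}}}, \qquad
    t_P = \sqrt{\frac{\hbar G}{c^{5}}}, \qquad
    E_P = \sqrt{\frac{\hbar c^{5}}{G}},
    \qquad\text{hence}\qquad E_P \, t_P = \hbar .
    ```

    Note how G and c swap between numerator and denominator as we pass from the spacetime scales to the energy scale - the dichotomously matched pair, with the quantum of action as their unity.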

    If we think of it geometrically, spacetime is extremitised by being flat. It becomes changeless, featureless and energyless by becoming maximally stretched out in Euclidean fashion. And then energy density or change is extremitised by being hyperbolically curved or maximally fluctuating. Instead of spacetime lying flat and even with itself, now every point is pointing away from such a dimensionally regular state. It all wants to break apart in every possible "direction" as quick as it can.

    So now that view of things is thermal and allows us to understand "time" as a (dichotomous) contrast between a backdrop flatness (a Universe that has developed to become generally large and cold) and a localised curvature (the patchy clumps of energy density represented by spacetime-bending "stuff" like nebulae gas clouds, stars, planets, atoms, blackholes).

    And matter is now the source of a further temporal dichotomy (one born of the symmetry breakings of particle physics) because it introduces the new possibility of an energy density that moves about at less than lightspeed. It now "takes time" to move about because action no longer has the vanilla rate of c, the vanilla rate of radiation. Mass is instead operating within the new symmetry-breaking, the new dialectical limits, of absolute rest and lightspeed.

    So the whole notion of time - in its familiar Newtonian sense - is something that has to develop via a succession of symmetry-breakings. The kind of time you are talking about did have a prior history in which it was a different (less differentiated, and thus more vague) kind of time for quite a long time. :)
  • There Are No Identities In Nature
    Then we might be inclined to say something ridiculous like neither one of these is prior to the other, they are co-dependent.Metaphysician Undercover

    Why would it be ridiculous? Is it because the present seems necessarily prior to either the past or the future in your definition of time here?
  • Questions about cornerstones in political philosophy
    It's 2016.MrAntigone

    Hmm. Why am I getting the impression this is about as good as it is going to get regarding your grasp of the facts? :)
  • Questions about cornerstones in political philosophy
    Agamben seems to me to think the capacity for violence has something to do with political sovereignty. And I said: The United States seemingly has a lot of capacity for violence bound up with its "superpower" status.MrAntigone

    The critical issue here seems not about having a capacity for violence but creating collective ownership of that capacity. And the right framework for analysis would be the usual social science one of a social system having to balance the fruitful tensions of competition and cooperation.

    So the cornerstone assumption is that a healthy system is one in which local competition flourishes, yet the whole is regulated in long term fashion by institutionalised constraints. People should have as much individual freedom as possible, but be ruled by collectively sensible laws.

    A capacity for violence/sovereignty would be judged against that general dynamic. Should individuals at their whim have ownership of violent actions? Well we let people slash away at their gardens mostly as they please. Or blow up the world in their computer games. Or get acceptably rough in the realms of competitive sport or the competitive marketplace.

    But then there is ultimately a need to collectively regulate violence - naked competition - at the cooperative level of being. We have to create rules to govern marketplace or sports field behaviour.

    The cooperative top-level of social order used to span just tribes, then races and faiths. Now we have nations and ideologies claiming this level of sovereignty - a top-down control over their people, but now together with a competitive attitude, a willingness to use violence, against rival nations or ideological groupings.

    So the logic is that as the world becomes connected at this level, the job is not done. We need something like a United Nations to take ownership of the capacity for collectivised violence in the name of all the folk of the planet.

    When someone like the US wants to go off and invade some oil fields, impose a little home-spun ideology on the heathen natives, it ought to be licensed by a higher form of sovereignty.

    Of course achieving this level of social integration seems a long way off - although the theory of it is completely accepted by many. So having the US as the global military superpower, the self-appointed world cop, is better than every other available alternative.

    It is because the US on the whole is likely to act for the general good that it is safest to put the ownership of international violence largely in its hands. We can see that the self-interests of the US align with our general democratic theory of how to run any flourishing society.

    So the capacity for violence is really a local competitive freedom - a potential of individual actors. What should emerge at the global collective scale of social organisation is the capacity for regulation of violence - a state-level capacity for setting its acceptable boundaries. Unfortunately that regulatory action itself can be pretty violent - a symptom of weak democracy. And also notions of sovereignty can see a national capacity for violence being directed against competing states.

    So theory would say a well balanced society was one in which the level of roughness felt generally appropriate. Violence is always going to be part of the equation as competition is basic to the social dynamic. But cooperation is the other half of the story. So that aims to be a force for smoothness, without wanting to tip things over the other way in the direction of stagnation, blind habit and blandness.
  • There Are No Identities In Nature
    Wilden too puts the whole issue in terms of difference, although he doesn't employ the vocabulary of intensive/extensive: "There are thus two kinds of difference involved, and the distinction between them is essential. Analog differences are differences of magnitude, frequency, distribution, pattern, organization, and the like. Digital differences are those such as can be coded into distinctions and oppositions, and for this, there must be discrete elements with well-defined boundaries..."StreetlightX

    I can agree with Wilden. It is when you start pulling in Deleuze and "aesthetics" and other such baggage that it loses analytic clarity and becomes a romantic melange of allusions.

    So accepting Wilden as a valid starting point, I will focus on the further things that could be said from a (pan)semiotic point of view.

    The key thing is that reality itself is digital in being marked. To talk about analog difference is already to talk about a reality that is constrained in particular material ways. If the weather is a pattern of magnitudes - the pressure high here, low there - then already the world is divided against itself, expressing a proto-negation.

    So a pure analog state would have to be a completely bland state, one characterised by its intensive or bulk properties. It would be like the early state of the Universe when all that existed was a thermalising bath of radiation - a featureless state with the same pressure and energy density and rate of action everywhere. The Big Bang was the least possible marked state of being - a spreading ocean with no discernible texture. The only change was the change of becoming steadily larger and cooler - a change that could only be appreciated if one was standing god-like outside everything that was happening.

    Yet even the radiation-dominated era of the early Big Bang had some digital structure. Action was confined to three spatial dimensions. It was also confined to a single temporal one in the sense that all action had to flow entropically downhill - to flow uphill would be neg-entropic!

    So contra your position, existence has to start with the digitisation of the analog - a primal symmetry-breaking. Or as I say, to make proper sense of this, we have to introduce the further foundational distinction of the vague~crisp. We have to reframe your LEM-based description in fully generic dichotomy-based logic.

    So now we get to a Peircean, Gestalt or Laws of Form level of thinking where both event and context, figure and ground, particular and general, atom and void, are produced together, mutually, when a symmetry is foundationally broken. In the beginning was a vagueness, an apeiron, a quantum roil, a firstness of pure qualitative fluctuation. Then this state of unformed potential was broken, marked by its most primal distinction. In Big Bang theory, we have a reciprocal relationship between an extensive container and its intensive contents - an expanding spacetime and a cooling ocean of radiation.

    This is the really difficult bit to get. But it means that the reductionist instinct to make one aspect of being prior or more foundational than its "other" is always going to mislead metaphysical thought. Does the digital precede the analog, or the analog precede the digital? The whole point of an organic and pansemiotic conception of this kind of question is to focus on how each brings its other into concrete being. To be able to make a mark is to reveal the possibility that there is a ground to accept that mark. So before anything happens - before there is any kind of difference, analog or digital - there is only the vagueness of a potential. And then when something happens, the digital and the analog would be what co-arise as the two aspects of being which such a symmetry breaking reveals.

    Now we start to get into the difficulties with your view. As I say, the purely analog - if it is to make dialectical sense - would have to be the least digitally marked kind of state that still has definite material being.

    So it would be like the earliest state of the Universe - a featureless and homogeneous realm of the cooling~expanding. All distinctions - all negations or differences that make a difference (to someone) - would be pushed to the margins of this generic state. It would only be a god-like observer, free to take a position outside the totality of this material existence, who could make remarks like "This Universe is colder/larger than it just was, and it is cooling/expanding at rate x rather than rate y or z." Or heading the other way in scale, remark "This Universe is featureless, except when we get down to the quantum grain, we can see it still has a residual fluctuating freedom that again is an active negation of its generalised state of constraint."

    But then of course the actual Big Bang went through its further symmetry-breaking phase transitions and matter condensed out of the radiation bath. This - in dichotomistic fashion - cleared the vacuum of energy in a way that made it the other of "the void". So now we still seem to be in an analog realm, but now one with a lot more possibilities for local magnitude differences. Mass is gravitationally clumping. A new level of action is starting to play out.

    The radiation era was already digitally-broken - it had generic counterfactuality in that it only had three spatial dimensions and a single entropic gradient, etc. But now the matter-dominated era was starting to get really broken. There existed mass that could have any contingent rate of motion between the limits of rest and lightspeed. Greater digital constraint - the marking of the extremes of speed as two crisply opposed limits - had just bred new analog variety in the fact that mass could travel at any rate on the spectrum of rates thus revealed.

    So you should be getting the picture. If we actually check in with the physics, we can see how analog~digital is a drama being played out in which both emerge together out of a primal symmetry-breaking. And then both evolve together as symmetry-breakings become the ground - the vaguer preconditions - for further symmetry-breakings which render the presence of the analog and the digital ever-crisper. Both aspects of nature are being strengthened because that is how the mutuality of dichotomous development works. The blacker the pencil, the whiter the paper it marks.

    Of course analog and digital were terms created for the late machine age and so are being dropped into a world with a very long history of becoming crisply developed in its dualistic fashion. If we look around the world of sensible objects, we see it sharply divided in terms of the continuous and the discrete, the part and the whole, the form and the matter, the flux and the stasis, the chance and the necessity, etc. That is physically how it is for us, being creatures that necessarily depend on the Universe having reached its high point of material complexity - sorted into stuff like heavy element planets bathed in the steady energy flux from a star fixed at an optimal distance.

    So what Wilden describes is the epistemic cut that underlies the further adventure that is life and mind in the cosmos. He is no longer talking about the material world in and of itself - the topic of pansemiosis. He is not talking about analog and digital in that general physicalist sense. He is now talking about symbolic representations of that materiality. And also perhaps, the evolution of that symbolism - which begins in the analogic simplicity of the iconic and indexical, and terminates in the digital crispness of the properly symbolic.

    If we are to talk about analog or iconic representation as opposed to being, then we are talking about machines like old-fashioned wax cylinders where a needle - driven by making noises into a tube - produces a wriggling groove. And then when the energy relation is reversed - the cylinder is cranked to wiggle the needle and cause the tube to utter noise - we get a playback of a trace.

    Crank the cylinder too fast or too slow, and we can have proto-negation - a funny playback that is a difference in kind in being a fictional representation rather than a realistic one. But generally, the analog representation is un-digital in being still so closely connected - as close as it can possibly be - to reversible physics.

    There is a symmetry-breaking - a one way expenditure of energy to make the recording and reduce dynamical reality (a sound of a band of minstrels singing down the tube) to an enduring negentropic memory trace. But it is a symmetrical symmetry-breaking, a shallow one, not a deep and asymmetry-producing symmetry-breaking (like a dichotomous symmetry-breaking). As I say, just turn things around so the groove drives the needle rather than the needle carving the groove, and you get back the memory you created as a dynamical performance of sound. The minstrels sing once more.

    So analog representation, or analog signal processing and analog computation, arises as the most primitive, least broken, form of memory-making. The triadic semiotic trick is all about a living/mindful system being able to internalise a view of the world - code for a set of world-regulating constraints using the machinery of a symbolic memory. And analog representation is the simplest version of that new trick. It sticks a machine - like a wax cylinder recorder - out into the world. And then exploits the physical asymmetry of a rotating cylinder and a dragging sharp point to construct a trace - a linear mark encoding a sequence of energy values.

    Just being able to switch the direction of the energy flow - from the needle to the cylinder versus from the cylinder to the needle - is all the digitality needed. On/off, forward/backward, record/playback. Semiosis at the lowest level boils down to the physical logic of the binary switch.

    So the point is that even analog devices are digital from the get-go. What we mean by analog in this context is that they cross the semiotic Rubicon by the least possible distance. They are devices that can do "representation", but of a kind so thin or materially direct that we wouldn't call it properly symbolic, just basically iconic, or at most, indexical.

    I hope you can see how - in ignoring the fine print of a definition of analog - you have produced a great confusion in so loosely applying the analog~digital distinction to the world in general, the ontic thing-in-itself, rather than honouring its technical epistemic meaning as a way to clarify our thinking about rate independent information - the semiotic mechanism by which life and mind forms memories or representations of the world.
  • There Are No Identities In Nature
    As I commented on elsewhere in reply to Pierre, the analog is not some kind of unknowable 'thing-in-itself' which is simply 'vague'; the analog has qualities which are knowable, but simply in a different mode than that of the digital.StreetlightX

    I didn't say the analog equates to the vague, so your reply is mostly off the point.

    I said to call the thing-in-itself anything is to take a theoretical stance. And so it is the epistemic (not ontic) vagueness that we aim to pierce here. And the way we pierce it is by forming up some robust dichotomy as our best guess as to what could be the case. Doing this employing a dichotomy ensures that whatever is the case in regard to the thing-in-itself, it must logically lie somewhere within the limits we have thus rigorously defined.

    And so one such dichotomous inquiry might be to ask is the thing-in-itself continuous or discrete (or in your less crisp lingo, analog or digital)?

    So again, in explaining the epistemic cut to MU, I was talking epistemology rather than ontology (the clue was in the "epistemic cut").

    Bateson himself speaks of how analog communication works....StreetlightX

    Yes. So iconic or indexical rather than symbolic. But as I said, I don't think you are working with a well defined dichotomy in talking about analog vs digital. They are not a reciprocal pairing in the way a proper dichotomy like discrete and continuous are. That is why you want to call them contrasting modes or levels of communication or representation. There is some fudging going on there that makes for weak metaphysics.

    Of course you could always pause to examine this point, tidy it up.

    At its base, this is what 'aesthetic' means: relating to space and time, as with Kant's 'transcendental aesthetic'.StreetlightX

    Whoa! Is that really what you have been meaning by "aesthetic"? Forgive me for thinking you were using it in the more usual sense....viz:

    Aesthetics (/ɛsˈθɛtɪks/; also spelled æsthetics and esthetics also known in Greek as Αισθητική, or "Aisthētiké") is a branch of philosophy dealing with the nature of art, beauty, and taste, with the creation and appreciation of beauty.[1][2] It is more scientifically defined as the study of sensory or sensori-emotional values, sometimes called judgements of sentiment and taste.[3] More broadly, scholars in the field define aesthetics as "critical reflection on art, culture and nature."

    Gilles Deleuze is the philosopher who has perhaps attended to the specificity of analog differences with the most care, referring to them as differences of 'intensity' as opposed to digital differences of 'extensity', noting how the former necessarily underlie the latter:StreetlightX

    Yeah. That passage reads like gibberish to me so you might have to put it into your own words.

    I understand what intensive and extensive properties mean in the standard physics context. I completely don't get your attempts to argue that they somehow reflect an analog~digital distinction.

    How are energy or volume "digital" and not physically continuous in their extensity?

    How are bulk properties like melting point and density "analog" when they have a value that doesn't change in continuous fashion?

    And how do intensive properties underlie extensive properties when instead an intensive property is formed by the ratio of two extensive properties (as in density being a ratio of mass and volume)? Is a ratio more basic than that which composes it?

    This is some baffling shit here.
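
    Just to pin down the standard physics usage I have in mind, a trivial toy example (my own, with made-up sample values): scale the amount of stuff and watch which numbers change.

    ```python
    # Intensive vs extensive: extensive properties scale with the amount
    # of substance; an intensive property (here density, a ratio of two
    # extensive ones) stays put under the scaling.
    mass, volume = 10.0, 2.0  # kg, m^3

    for scale in (1, 2, 10):
        m, v = mass * scale, volume * scale
        print(f"x{scale:2d}: mass={m:6.1f} kg, volume={v:5.1f} m^3, "
              f"density={m / v:4.1f} kg/m^3 (unchanged)")
    ```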

    I can see how you/Deleuze might be driving at a substantialist ontology - one that takes existence to be rooted in the definiteness of material being. And so the inherent properties of substance would seem more fundamental than the relational ones.

    But that is quite a different kind of ontology to a triadic process one where - as in hylomorphism - formal constraints shape up or individuate material potential so as to produce the middle-ground actuality we know as substantial being.

    This field is intensive, which is to say it implies a distribution in depth of differences in intensity ...StreetlightX

    Again, this is unscientific horseshit. By its very definition, an intensive property is constant throughout the substance in which it is said to inhere. It can't vary in intensity without some further reason to make it so - what I would call a further informational constraint, and which you would thus have to call "discrete/digital knowledge" in the position you are advancing.

    So again, to turn back to our eternal debate, any metaphysics based on modeling relations - itself premised on discrete, digital knowledge - is derivative of a more primal aesthetic ground out of which it is born.StreetlightX

    Again this seems a weird definition of aesthetic. Even if we go now with this being a reference to necessary Kantian intuitions about Euclidean space, I think most agree that Kant screwed this bit up. And it is hardly primal, or non-conceptual/non-digital, to project onto space the idea that it is flat and infinite in a dimensioned, countable, Euclidean-maths way.

    Psychology shows that we do dichotomise spatial relations in a fairly primal and inductively-learnt fashion - a posteriori knowledge. We learn that everything we see is generally large when it is close at hand and small when it is far away .... if it is the kind of thing with a normally constant size. And spatial distance in turn relates to time and energy. If it looks close, we can probably get to it quite soon with not too much effort.

    So an embodied sense of being in the world is built up from these kinds of exploratory learnings. They are the dichotomies of experience rather than the antinomies of pure reason. :)

    So I agree with your general urge to take an enactive or embodied approach to epistemology here. Biosemiosis is indeed foundational to linguistic or mathematical semiosis. And a lot of philosophy does go in the other direction in presuming a physics-free disembodied rationality. That is why computers seem so ... deep ... to so many. They are disembodied rationality, pure syntax, brain in a vat digitalism, personified.

    And I get the general thrust of what you mean about the digital distinction. Biologists are embracing Peircean semiotics because it gets at the basis of how - in Pattee's words - rate independent information (a digital code/memory) can constrain rate dependent dynamics (the Newtonian realm of "analog" or continuously state-determined material processes).

    So these are the important points. Dissipative structure can be regulated by a machinery of memory. And this is how bodies are formed, individuals are individuated, autonomy arises.

    But Deleuze seems mostly to be mangled Prigogine. And Prigogine, while a genius, was also working at the level of rate dependent dynamics. He wasn't about the larger semiotic story of the epistemic cut and rate independent information. So to make Prigogine your departure point is - as with autopoiesis or dynamical systems theory - to strike out with only half the whole story.
  • There Are No Identities In Nature
    We cannot assume a proper A and not-A relation between the analog and the digitalMetaphysician Undercover

    I agree that analog~digital is probably not a proper dichotomy. They are terms that arose early on in the development of signalling technology. And so it is a little blurry whether analog - in being iconic, a direct representation of its material source - is the opposite of symbolic, or merely proto-symbolic.

    Retrospectively, we could tidy this up and find a way to define digital as 1/analog, and analog as 1/digital. But really, that is a reason I would rarely talk about analog and digital as a crucial metaphysical distinction. Discrete and continuous is simpler to understand as a rigorous dichotomy. Likewise matter and symbol. But analog~digital is a little ambiguous in comparison.

    I do not claim that we need to start in certainty, this is more like what you imply. You imply that if a thing is different from A you can establish the logical certainty of not-A of that thing...Metaphysician Undercover

    Correct. I say it is essential to bypass uncertainty and begin with a confident positive assertion - just state an axiom or premise which has the logical form of the LEM. But that positive start is what you then seek to test. Does the guess work out in fact?

    So this is the mistake, these two, discrete and continuous, are not properly opposed and therefore are not mutually exclusive, as you imply. We have discrete colours, red, yellow, green, blue, within a continuous spectrumMetaphysician Undercover

    Given colour experience is the most unreal of mental constructions, this example is already off to the worst possible start.

    The world is not coloured red, yellow, green or blue, nor any mix of these primary hues. That much we know from basic psychophysics.
  • There Are No Identities In Nature
    Yes, I guess that is the big question. But even if the real turned out to be (that is if we could know without question that it definitely was) discrete, is it reasonable to think that discreteness could consist in absolutely precise boundaries between the fundamental units? That would seem to evoke Leibniz' Monadology.John

    Well quantum theory says reality is fundamentally uncertain - so fundamentally vague. The discrete and the continuous would then be emergent in being the crisply complementary limits on that basic indeterminism. So it is not a question of whether reality is particle-like or wave-like. Instead those are the bounding alternatives. And which you see becomes a matter of point of view. The observer conjures up the wave or the particle, depending on the type of measurement he chooses.
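
    And by "fundamentally uncertain" I mean nothing more exotic than the standard quantum bound - though the gloss of "vagueness" is my own reading of it:

    ```latex
    % Heisenberg's bound: position and momentum cannot both be crisp.
    \Delta x \, \Delta p \geq \frac{\hbar}{2}
    ```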

    So if this is the actual ontology of the world, then it is only reasonable that it is reflected in our ideas about logic too. A deep logic is going to go beyond emergent features like continuous and discrete to connect with the indeterministic or vague.

    Perhaps all three have their different places and functions in the 'grand scheme'? I'm guessing though, that you see the other two as being subsumed and augmented by pragmatic metaphysics?John

    The grand scheme of pragmatism is triadic. So logic has three levels - firstness, secondness and thirdness. Or to talk about it more psychologically: the first thing of pure monistic quality, the second thing of a dyadic reaction or relation, and the third thing of mediation or habit - a hierarchically structured relation in which a memory becomes the context generally shaping events.

    Now the familiar model of logic - as encoded in the laws of thought - is all about secondness or dyadic relations between particular things and events. It is a logic of the particular, in short. It presumes the world already exists as a crisp state of affairs, a set of individuated facts. And it takes a nominalist view on abstracta or laws or any other kinds of transcendent regularity.

    It is this logic of the particular that AP-types instinctively seize on to do any metaphysics. You see that in TGW and his furrowed brow when modal logic gets challenged. The only logic that computes is the stuff which ordinary logic courses spend all their time teaching - the logic that is splendidly mechanical and a valued tool in a society that values the making of machines.

    Then we have the larger logic of Pragmatism which comes out of the long tradition of organicism and holism. This now adds in a logic of vagueness and a logic of dialectics or symmetry breaking - the firstness and thirdness in Peirce's scheme. (He also distinguished these two categories as the complementary principles of "tychism" - or absolute chance - and "synechism", or generalised continuity.)

    So now we have a logic founded in vagueness or indeterminism. Nature creatively sports possibilities. Already the principle of sufficient reason - an axiom of ordinary logic - is denied. Fluctuations can happen without limit.

    But then that unbounded and chaotic firstness contains within it the seeds of its own self-regulation. Because while indeed "everything can happen", everything that is then contradictory is going to cancel itself out. So just in trying to be completely chaotic, already firstness is on the way to being self-limiting. And anyone who knows quantum field theory will recognise Feynman's path integral or sum-over-histories logic at work here. This isn't some bit of wild-eyed metaphysics. It is exactly how physics has come to make sense of the world in the past 50 years.
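
    For those who want the formula rather than the metaphor, the sum-over-histories amplitude is standard physics - though the connection to Peircean firstness is of course my gloss, not Feynman's:

    ```latex
    % Sum over all possible histories x(t): wildly fluctuating paths
    % interfere destructively, so only paths near the stationary action
    % survive - chaos cancelling itself out.
    \langle x_f, t_f \mid x_i, t_i \rangle = \int \mathcal{D}[x(t)] \, e^{i S[x(t)]/\hbar}
    ```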

    Then we go to the other thing of the dialectic or the dichotomy. This is now a logic of generality. This is how we reason to extract the plausible limits on existence itself. So as with this thread, as we abstract away the particulars, that always leaves the duality of thesis and antithesis - two possible extremum principles, both of which seem equally "true".

    So the LEM is for reasoning about particulars. An individuated thing or event has to be logically one thing or another. If it is A, then it is not not-A, and vice versa. Negation seems fundamental in this context. You have to reduce reality to descriptive binaries - and then hold one of the two options true, the other false. And as I say, as a model of secondness, a logic of particulars, it works really well. It makes for splendid machines. And even societies that think and act like machines.

    But then the dichotomy is the basis for a logic of generality. Now - following its rules requiring a separation of vague possibility into crisp actuality via a dichotomising process of mutual exclusion/collective exhaustion - we will always arrive at complementary poles of being. We have two alternatives - and both must be "true" in the sense of being ultimate bounds on possibility.

    You can head towards the two poles of "the discrete" and "the continuous", but you could never go past them - for how can the discrete be more discrete than the discrete? And you never really leave either behind, as the only way to know you are headed towards discreteness is that it is measurable - plainly visible - that you are still headed away from continuity. And vice versa. So formally, mathematically, the dichotomy encodes the asymmetry of a complete symmetry-breaking. It describes a reciprocal or inverse relation where the way to make one end bigger (or truer, more dominant, more real, more fundamental) is to make the other end smaller.

    We see this in familiar things like infinities and their reciprocal, the infinitesimal. What is the number line except an infinity of infinitesimals? That is why a number line can be both continuous and discrete at the same time - unlimitedly countable. It encodes an uncertainty relation at its base. Neither the continuous nor the discrete is fundamental; both are merely emergent. It is the idea of the infinitesimal difference that reciprocally allows the construction of the unboundedly continuous (when it comes to counting). The infinitesimal = 1/infinity, and vice versa.
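
    And the reciprocal relation can be given a perfectly tame standard-analysis gloss - a sketch only, with the metaphysical reading being mine:

    ```latex
    % The reciprocal map trades the unboundedly small for the unboundedly
    % large: shrinking one end of the relation is what grows the other.
    \lim_{n \to \infty} \frac{1}{n} = 0, \qquad
    x \mapsto \frac{1}{x} \ \text{maps} \ (0, 1] \ \text{onto} \ [1, \infty)
    ```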

    So metaphysics got going when it discovered this logic of generality or dialectical reasoning. Ancient Greece spilled out a whole set of logical dichotomies that underpin pretty much all of the science and thought that has happened ever since.

    Now PoMo - showing the Hegelian roots of its Marxist leanings - has flirted a lot with this dialectical reasoning. So at least it knows about it. But mostly it uses dialectics to generate a play of paradox. It points out that two opposite things always seem true about nature. However, instead of saying, well yes of course, and that is what leads on to the Peircean thirdness of habit or hierarchical organisation, it treats that fact as some source of deep ambivalence. PoMo is - politically - anti-hierarchical. And so it prefers to conclude that the inevitability of dichotomies is instead a sign that we should somehow return to the vague source of things - the radical uncertainty in which things would again be freest.

    It might sound like it is a good thing to return to vagueness like this. But it isn't true vagueness - PoMo just doesn't have a tradition in that regard. Instead it is just another version of AP's notion of existence as an essentially random collection of events, a state of affairs composed of already individuated being.

    OK, PoMo does have some concerns about how individuation comes about in fact. But it has no logic of vagueness to work with. Its grasp of logic on the whole is sketchy and not central to its concerns. It actually quite likes the idea of Romantic irrationality as its alternative to the patently mechanical mindset of AP.

    So that is why I say Pragmatism is the only brand of metaphysics that both does pursue logic with rigour and has a large enough model of logic to talk about the whole of existence.

    AP has tunnel vision. It only wants to apply the logic of the particular with sterile relentlessness. PoMo has ADHD. It is all over the shop as to what logic really is. Only Pragmatism (as defined by Peirce) uses a formally holistic logic that comprises three elements in interaction - a logic of vagueness, a logic of particularity, and a logic of generality.

    Though of course Peirce wasn't the end of the story. He was only a solid beginning. Our ideas about symmetry-breaking and hierarchy theory are much more mathematically developed these days. And quantum theory is rubbing our noses in the reality of indeterminism. So we can be a lot sharper about defining both vagueness and generality now.
  • There Are No Identities In Nature
    Digitization just is the introduction of precise boundaries.John

    That's right. But then there is still the issue of how they can be imposed on the world - the issue of human measurement.

    And then - where this gets radically metaphysical - there is the post-quantum issue of measurement in general.

    So through semiotics, we come to explain human understanding of the world as a triadic sign relation. And then it now seems as though the world itself is ontically pan-semiotic - a system that self-referentially measures itself into being in some concrete sense. The universe has to observe itself to "collapse the wavefunction" and have a digitally-crisp, atomistic, mechanically-determined, state of being.

    Of course we then call that classical world, that realm of continuous Newtonian dynamics, our analog reality in contrast with the digitality of our symbolic representations of that world.

    But quantum theory has re-introduced the basic metaphysical dichotomy - is existence continuous or discrete (or indeed, beyond that, indeterministic)? - at base.

    So we know how in epistemic fashion we impose intelligible order on the world in a way that makes it pragmatically measurable. But even while arriving at a fully working theory of that - as in biosemiosis - up pops the holographic bound in fundamental physics and other pansemiotic questions about how the Universe solves its own measurement problem. Where does it stand so as to resolve its own indeterminacy in globally self-referential fashion?

    Given this seems to be a debate about Analytic metaphysics vs PoMo metaphysics, as usual I would say only Pragmatic metaphysics has the proper resources to answer these kinds of questions properly. :)
  • There Are No Identities In Nature
    Accordingly, anything you might say about this analog existence, this continuum, is based only in this assumption. So in order to say anything true about the continuum, your assumption of a real existing continuum must be first validated, justified. Only by validating this assumption does the nature of the continuum become intelligible. To simply assume a continuum, and say that it is of an analog nature, and completely other than the digital, is just an assumption which is completely unjustified, until it is demonstrated why this is assumed to be the case.Metaphysician Undercover

    The semiotic relation is triadic. And this insertion of an extra step - an epistemic cut - is what gets you past this kind of problem.

    So the analog thing-in-itself is vague. It only comes to be called a continuum in crisp distinction to the digital or the discrete within the realm of symbolisation or signification. It is a logical step to insist the world must be divided into A and not-A in this fashion. And then in forming this strong, metaphysical-strength, dichotomy of possibility, it can be used as a theory by which pragmatically to measure reality. We can form the counterfactually-framed belief that reality must be either discrete or continuous, digital or analog, and then test reality against this self-describing theory.

    So the situation is the reverse of the one you paint. We don't need to begin in certainty. Instead - as Peirce and Popper argued with abductive reasoning, as Goedel, Von Neumann and others demonstrated with symbolic reflexivity in general - it can all start with a reasonable guess. We can always divide uncertainty towards two dialectically self-grounding global possibilities. The thing-in-itself must be either (in the limit) discrete or continuous. And then having constructed such a sharply dichotomised state of metaphysical certainty - a logical either/or - we have the solid ground we need to begin to measure reality against that idea of its true nature. Pragmatically, we can go on to discover how true our reasoned guess seems.

    And in Kantian fashion, we never of course grasp the thing-in-itself. That remains formally vague. But the epistemic cut now renders the thing-in-itself as a digitised system of signs. We know it via the measurements that come to stand for it within a framework of theory. And in some sense this system of signs works and so endures. It is a memory of our past that is certain enough to predict our futures.

    So the assumptions here begin in a discussion of existential possibility. If anything exists - in the spatiotemporally-extended sense that we think of as "the world" - then metaphysical logic says there are two options, two extremum principles, when it comes to how that world has definite being. Either it must be continuous or discrete, connected or divided, integrated or differentiated, relational or atomistic, morphism or structure, flux or stasis, etc, etc - all the different ways of getting at essentially the same distinction when it comes to extended being.

    And having identified two complementary limits on being - terms that are logically self-grounding because they are seen to be both mutually-exclusive and jointly-exhaustive - we can be as certain as we can be of anything that reality, the vague thing-in-itself, must fall somewhere between the two metaphysical limits thus defined. Exactly where on this now crisply-defined spectrum is what becomes the subject of measurement.

    Note that this dichotomy itself encodes both the digital and the continuous in being like a line segment - a continuous line marked by two opposing end-points.

    So anyway, the very idea of the analog~digital is based on the more primal dichotomy of the continuous~discrete - a way of talking about reality in general. But with the analog~digital, we are now drawing attention to the general semiotic matter~symbol dichotomy - the step up in material complexity represented by life and mind.

    The analog~digital dichotomy has sprung up in computation and information theory as an ontological basis for a technology - an ontology for constructing machines rather than growing organisms. And yet, in retrospective fashion, it has now become a sharper way of getting at the essence of what life and mind are about - the semiotic modelling relation that organisms have with worlds. The analogy of the code is very useful - not least because it brings so much maths with it.

    But in a sense, the analog~digital dichotomy also overshoots its mark. It leads to the idea that modeler and modeled actually are broken apart in dualistic fashion - like hardware and software. And this leads to the breakdown in understanding here - the questions about how a continuous world can be digitally marked unless it is somehow already tacitly marked in that fashion.

    So once we start to talk about the Kantian "modeler in the world", the first step is to make this essential break - this epistemic cut - of seeing it as the rise of the digital within the analog. Material events gain the power of being symbolic acts. But then we must go on to arrive at a fully triadic model of the modeling relation. And so attention returns to the middle thing which is the informal acts of measurement that a model must make to connect with its world.

    This is the focus of modern biosemioticians like Pattee, Rosen and Salthe, and of many others like Bateson, Wilden, Spencer-Brown, and so on. What is it that properly constitutes a measurement? What is it that defines a difference that makes a difference?
  • Dennett says philosophy today is self-indulgent and irrelevant
    Questions about individuation and flourishing have an obvious logical basis in common, wouldn't you say? And flourishing is a pretty practical issue too. To know what it is would be to know how to do it.
  • The intelligibility of the world
    You think consciousness is amazing, but I think Life is also amazing, and we know that Life is a physical process. It is a physical process we are beginning to understand rather well, but if you look at the physical theory that explains it, there is no mention of "say, a force particle/wave or a matter particle". It is a theory of replicators subject to variation and selection. But look - a "physical" theory of abstract objects!tom

    Except biologists themselves would say it is physics regulated by something further - symbols or information.

    The two are of course related in some fashion. But you seem to be talking right past that issue - questions like how a molecule can be a message.
  • Dennett says philosophy today is self-indulgent and irrelevant
    Many discussions about modality are confused because they don't differentiate between modal systems, don't understand the difference between epistemic and deontic modality, and so on. Modal logic itself cannot tell us about the nature of possibility, but again, a logic is a mathematical object, not a metaphysical thesis.The Great Whatever

    Sorry but modal logic bypasses the essential issue of individuation. It treats possibility as countable variety and not indeterminate potential, from the get-go.

    This is largely due to the very nature of maths of course - being the science of the already countable. Give a man a hammer, etc.
  • Dennett says philosophy today is self-indulgent and irrelevant
    Thus someone like Gilbert Simondon, for example, will write from the perspective of individuation, "at the level of being prior to any individuation, the law of the excluded middle and the principle of identity do not apply; these principles are only applicable to the being that has already been individuated; they define an impoverished being, separated into environment and individual. …StreetlightX

    This is important. Imagine how actually useful modern metaphysics would be if it were generally focused on the central question of individuation rather than being - dynamical development rather than static existence.
  • Thesis: Explanations Must Be "Shallow"
    Or perhaps a metaphysician/scientist can or has deduced the law of gravity from a more general law (gravity is just an example, not at all my interest here). Then this "law" is itself either deduced from yet a more general "law" or itself has "just because" status. Infinite regress or bust, in other words. Hence the "shallowness of explanation."who

    But isn't what really happened that Newton made a successful simplifying generalisation? So for a start, technically, it was an induction rather than a deduction.

    Newtonian gravity made the generalisation that instead of just some things falling towards other things, everything had exactly the same propensity to fall together. And then to go beyond that Newtonian generalisation would require an even more complete generalisation - like general relativity, and after that, quantum gravity.

    But while this seems like a regress - with no end in sight - you have to take into account that generalisation can only continue so long as there are local particulars to be mopped up in this fashion.

    Newtonian gravity mopped up all the different ways objects fall by saying all mass had the same basic attractive force, so the only local difference to mention is the amount of mass in some spot. Then GR mopped up that kind of particularity in saying mass and energy were both the same general stuff, and a simpler, more general, way to model attraction was positive spacetime curvature, which handled local differences in momentum. QG would take the mopping up to a logical conclusion in putting all the different physical forces on the one quantum field theory footing.
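
    To make the mopping up concrete - standard formulas, with the weak-field point being textbook GR:

    ```latex
    % Newton: one universal rule of attraction, so the only local
    % particular left to specify is how much mass is where.
    F = G \, \frac{m_1 m_2}{r^2}
    % GR then mops up further: this law re-emerges as its weak-field,
    % low-velocity limit, with spacetime curvature doing the general work.
    ```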

    So what I am saying is that the inductive explanatory regress is self-limiting. It will halt at the point where it runs out of local particulars to generalise away. That is what founds a notion of a theory of everything. It is an asymptotic approach to a limit on explanation.
  • Reality and the nature of being
    The Big Bang was apparently a singularity - a Planck-length point of existence that contained anything and everything that could have ever become.Albert Keirkenhaur

    That is a common misconception - that the Big Bang starts from some particular point of spacetime and then expands to fill the whole of that spacetime.

    Instead, the Big Bang is itself the development of spacetime and so where it all "starts from" is not a location but instead a scale - the Planck scale.

    The question then is what kind of thing is the Planck scale?

    And it is at this point that you have to think beyond familiar classical concepts like spacetime and energy density. Quantum theory says at the Planck scale, these two things are at unity in some way that adds up to the most radical kind of uncertainty about anything existing.
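
    For concreteness, the Planck scale is just what you get by combining the fundamental constants - the standard definition:

    ```latex
    % The Planck length: the scale at which quantum uncertainty and
    % gravitational curvature are of the same order.
    l_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35} \,\text{m}
    ```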
  • Reality and the nature of being
    As we know energy can not be created or destroyed, but simply re-used. So one wonders how it could possibly be that energy itself even exists at all. it's really quite the puzzle..Albert Keirkenhaur

    Does physics say the Big Bang started in a high state of energy or a maximum Planck-scale state of quantum uncertainty?

    Once the uncertainty started to sort itself into the complementary things of a fundamental action happening in a spacetime - a classical kind of realm with a thermally-cooling "contents" in a thermally-spreading "container" - then we could of course talk about one aspect of this system as being the energy, the matter, the negentropy, etc. But that is a retrospective view from the point of view of classical ontology. And can such concepts be secure in talking about the "time" when everything was maximally quantum?
  • General purpose A.I. is it here?
    In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).m-theory

    I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.

    On incompleteness, Chaitin stresses that it shows the machine-like or syntactic derivation of maths to be a pure-maths myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.

    As Chaitin says, the problem for pure maths was that it had the view that all maths could be derived from some finite set of axioms. And instead creativity says axiom production is what is infinitely open ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.

    As a fundamental problem for life and mind, this is not even on the usual computer science radar.

    Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

    But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

    Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen, or it's not - in which case it dies and is erased from history.

    So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is already now instead a death sentence. So that part of the biologically realistic deal is the rate dependent dynamics.
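
    Here is a minimal toy sketch in Python of that mutate-and-judge loop. It is not Chaitin's actual construction - his organisms are arbitrary programs scored by a Busy Beaver-style oracle - just the general architecture, with an integer genome standing in for the program and the product of its genes standing in for the number produced:

    ```python
    import random

    # Toy sketch only - NOT Chaitin's metabiology model. An integer
    # "genome" stands in for a program; the product of its genes stands
    # in for the number that program produces.

    def phenotype(genome):
        """Rate dependent side: the number this organism actually yields."""
        result = 1
        for gene in genome:
            result *= gene
        return result

    def mutate(genome):
        """Rate independent side: a point mutation on the stored code."""
        child = genome[:]
        child[random.randrange(len(child))] = random.randint(1, 9)
        return child

    def evolve(steps=1000):
        organism = [1] * 6              # the initial memory state
        record = phenotype(organism)    # the environment's running record
        for _ in range(steps):
            mutant = mutate(organism)
            score = phenotype(mutant)
            if score > record:          # beat the record and survive...
                organism, record = mutant, score
            # ...or die and be erased from history (the mutant is dropped)
        return organism, record

    if __name__ == "__main__":
        random.seed(0)
        print(evolve())
    ```

    The point of the sketch is only the architecture: a rate independent memory being mutated on one side of the cut, and a rate dependent judgement being passed on the other.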
  • The intelligibility of the world
    Language is also obviously constrained by actuality, by the nature of what is experienced. It also comes to constrain that experience; it is a reciprocal or symbiotic relation between perception and conception. For me that natural primordial symbiosis consists in the reception of, response to and creation of signs, and I suspect apokrisis would agree.John

    Yep. Symbiosis is a good way to think about it. It all has the causal interdependency that an ecological perspective presumes.
  • The intelligibility of the world
    So, you are going to bypass this problem by ignoring it and go on to more answerable problems? Then you are not answering the question at hand. The naked primal experience is at hand.schopenhauer1

    You forget that I was addressing the OP, not the Hard Problem.

    But we've talked about the Hard Problem often enough. I agree that there is a limit on modelling when modelling runs out of counterfactuality. And this reinforces what I have been saying about intelligibility. To be intelligible, there must be the alternative that gets excluded in presenting the explanation. And once we get down to "raw feels" like redness or the scent of a rose, we don't have counterfactuals - like how red could be other than what it is to us.

    But up until the limit, no problem. Or all Easy Problem.

    And then - to challenge your more general "why should it feel like anything?" - here is my response. If the brain is in a running semiotic interaction with the world, such that it is a model of being in that world, then why should it not feel like something? Why would we expect the brain to be doing everything that it is doing and yet there not be something that it is like to be doing all that?

    Of course it requires a considerable understanding of cognitive neuroscience to have a feeling of just how much is in fact going on when brains model worlds in embodied fashion - far and away, by orders of magnitude, the most complex knot of activity in the known Universe. But still, the Hard Problem for philosophical zombie believers is: why wouldn't it be like something to be a brain in that precise semiotic relation to the world? Answer me that.

    Panpsychism is a different kettle of fish. It just buries its lack of explanatory mechanism as far out of sight as possible. It says don't worry folks. Consciousness is this little glow of awareness that inhabits all matter. And that is your "explanation". Tah, dah!
  • General purpose A.I. is it here?
    You will have to forgive me if I find that line to be a rather large leap and not so straight forward as you take for granted..m-theory

    Only because you stubbornly misrepresent my position.

    So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about?m-theory

    Exactly. Why say POMDP sorts all your problems when it is now clear that you have no technical understanding of POMDP?

    Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.m-theory

    Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.
  • General purpose A.I. is it here?
    This does not make it any clearer what you mean when you are using this term.
    Again real world computation is not physics free, even if computation theory has thought experiments that ignore physical constraints.
    m-theory

    Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)

    We don't have a technical account of your issue.
    It was a mistake of me to try and find a technical solution prior I admit.
    m-theory

    :-}
  • General purpose A.I. is it here?
    I don't really have time to explain repeatedly that fundamentally I don't agree that relevant terms such as these examples are excluded from computational implementation.m-theory

    Repeatedly? Once properly would suffice.

    This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.m-theory

    Read Rosen's book then.

    Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?m-theory

    You just changed your wording. Being dichotomously divided is importantly different from existing independently.

    So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.

    And that is indeed the basis of my pan-semiotic - not pan-psychic - metaphysics. It is why I see the essential issue here the other way round to you. The fundamental division has to develop from some seed symmetry breaking. I gave you links to the biophysics that talks about that fundamental symmetry breaking when it comes to pansemiosis - the fact that there is a convergence zone at the thermal nano-scale where suddenly energetic processes can be switched from one type to another at "no cost". Physics becomes regulable by information. The necessary epistemic cut just emerges all by itself right there, for material reasons that are completely unmysterious and fully formally described.

    The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans.m-theory

    What a triumph. A computer got good at winning a game completely defined by abstract rules. And we pretend that it discovered what counts as "winning" without humans to make sure that it "knew" it had won. Hey, if only the machine had been programmed to run about the room flashing lights and shouting "In your face, puny beings", then we would be in no doubt it really understood/experienced/felt/observed/whatever what it had just done.

    Again I can make no sense of your "physics free" insistence here.m-theory

    So you read that Pattee reference before dismissing it?

    And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one.m-theory

    I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.

    I did not anticipate that you would insist that I define all the terms I use in technical detail.
    I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further our exchange.
    m-theory

    I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

    POMDP is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental-level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?
  • The intelligibility of the world
    You're saying that logic constrains thinking, and that is false, because you are making logic, which is a passive tool of thought, into something which actively constrains thought.Metaphysician Undercover

    A tool is an effective cause. A logical constraint is a formal cause. So you are confusing your Aristotelean categories here.

    But logic is not a "passive tool of thought"; on the contrary we cannot think cogently without it.John

    I agree. It is the structural grounding that makes it even possible to act in a "thoughtful" way.

    Of course you can go back before the development of formal language, and even grammatical speech, and argue that animals think without this "tool".

    Yet in fact if you check the very structure of the brain, it is "logical" in a general dichotomistic or symmetry-breaking sense. It has an architecture that is making logical breaks at every point of its design.

    It starts right with the receptive fields of sensory cells. They are generally divided so that their firing is enhanced when hit centrally, and their firing is suppressed by the same stimulus hitting them peripherally. And then to balance that, a matching set of cells does the exact reverse. This way, a logically binary response is imposed on the world and information processing can begin.
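
    A minimal sketch of that on-centre/off-centre logic - illustrative thresholds only, not any real neuron's tuning:

    ```python
    # Toy on-centre/off-centre receptive field pair. The off-centre cell
    # is simply the sign-flipped twin of the on-centre cell.

    def on_centre(stimulus, centre, width=1.0, surround=3.0):
        """Firing enhanced by a central hit, suppressed by a peripheral one."""
        distance = abs(stimulus - centre)
        if distance <= width:
            return 1.0       # stimulus lands centrally: enhanced firing
        if distance <= surround:
            return -0.5      # same stimulus lands peripherally: suppressed
        return 0.0           # outside the receptive field: no response

    def off_centre(stimulus, centre, width=1.0, surround=3.0):
        """The matching antagonist cell: the exact reverse response."""
        return -on_centre(stimulus, centre, width, surround)

    if __name__ == "__main__":
        for x in range(-5, 6):
            print(f"stimulus {x:+d}: "
                  f"on {on_centre(x, 0):+.1f}, off {off_centre(x, 0):+.1f}")
    ```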

    Then even when the brain becomes a big lump of grey matter, it still is organised with a dichotomous logic - all the separations between motor and sensory areas, object identity and spatial relation pathways, left vs right hemisphere "focus vs fringe" processing styles, etc.
  • Ignoring suffering for self-indulgence
    If you care about suffering, you will do something about it.darthbarracuda

    But while I arguably can't help but care about my suffering, why should I "have to" care about yours? So phrased this way, you already presume empathy as a brute fact of your moral economy?

    For me (and I think for most everyone else who isn't lacking in compassion and empathy - i.e. sociopaths, psychopaths, selfish individuals, most politicians, etc.), it seems wrong to ignore someone who just broke their leg down the block and is screaming in pain...darthbarracuda

    So yes. There is something bio-typical and evolutionarily advantageous about empathy. We can even point to the neurochemistry and brain architecture that makes it a biologically-unavoidable aspect of neurotypical human existence.

    But what then of those who are wired differently and lack such empathy? Is it moral that they should ignore such a situation, or exploit the situation in some non-empathetic fashion? If not, then on what grounds are you now arguing that they should fake some kind of neurotypical feelings of care?

    So in general I think there really is no other position to take other than to accept that those who are worse-off than we are should be sought out and helped to the best of our abilities - in other words, if the cost of us helping them is reasonably lower than the relief the victim experiences, we have a moral obligation to do so.darthbarracuda

    But that can't follow if you begin with this notion of "I care". It doesn't deal with the people who don't actually care (through no fault of their own, just bad genetic luck probably exacerbated by bad childhood experience).

    So a morality based on neurotypicality is not as self-justifying as you want to claim. A consequence of such a rigid position is clearly eugenics - let's weed the unempathetic out.

    Of course we instead generally take a more biologically sound approach - recognise that variation even on empathy is part of a natural spectrum. Degrees in the ability to care are neurotypically normal. Where intervention is most justified is in childhood experience - get in there with social services. And also consider the way that "normal society" in fact might encourage un-empathetic behaviours. Then for the dangerously damaged, you lock them away.

    So to make care central, you have to deal with its natural variety in principled fashion - as well as the fact that this is essentially a naturalistic argument. Is is ought. Because empathy is commonplace in neurodevelopment, empathy is morally right.

    This leads to uncomfortable/guilty conclusions that I think modern ethicists have made an entire speculative field out of to try to mitigate: essentially much of modern ethics ends up being apologetics for not doing enough, or being a lazy, selfish individual, i.e. justifying inherent human dispositions as if they are on par with our apparent moral obligations.darthbarracuda

    From a psychological point of view, getting out and involved in ordinary community stuff is the healthy antidote to the deep pessimism that an isolationist and introverted lifestyle will likely perpetuate.

    So it is quite wrong - psychologically - to frame this in terms of people being lazy and selfish (as if these were the biologically natural traits). Instead, what is natural - what we have evolved for - is to live in a close and simple tribal relation. And it is modern society that allows and encourages a strong polarisation of personality types.

    The good thing about modern society is that it allows a stronger expression of both introversion and extraversion - the most basic psychodynamic personality dimension. And then that is also a bad thing in that people can retreat too far into those separate styles of existence.

    ....and most of all the complete abandonment of one's own personal desires in order to help others.darthbarracuda

    So from one extreme to the other, hey?

    I think you have to start with the naturalistic basis of your OP - that we neurotypically find that we care about the suffering (and happiness) of others. And then follow that through to its logical conclusions. And this complete individual self-abnegation is not a naturalistic answer. It is not going to be a neurotypically average response - one that feels right given the way most people feel.
  • The intelligibility of the world
    Could someone explain to me what is wrong with the homuncular approach? People speak as if this is some big fallacy, but until the homuncular approach is proven wrong, why should we be afraid of it?Metaphysician Undercover

    Infinite regress. An explanation endlessly deferred is an explanation never actually given.
  • General purpose A.I. is it here?
    Agency is any system which observes and acts in its environment autonomously.m-theory

    Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

    So how precisely are observation, action and autonomy defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.

    The same applies to a computational agent, it is embedded with its environment through sensory perceptions.m-theory

    Again this is equivocal. What is a "sensory perception" when we are talking about a computer, a syntactic machine? Give us the maths behind the assertion.

    Pattee must demonstrate that exact solutions are necessary for semantics.m-theory

    But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.

    You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

    http://www.people.vcu.edu/~mikuleck/rosrev.html

    I also provided a link that is extremely detailed.m-theory

    The question here is whether you understand your sources.

    POMDP illustrates why infinite regress is not completely intractable; it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary and the general solutions used in POMDP resolve issues of the epistemic cut.m-theory

    Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.
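
    For the record, the standard POMDP belief update is the sort of technical account I keep asking for - a textbook formula, with whether it actually touches the epistemic cut being exactly what is in dispute:

    ```latex
    % Standard POMDP belief update: the agent's new belief over hidden
    % states s' after taking action a and receiving observation o.
    b'(s') = \frac{O(o \mid s', a) \sum_{s} T(s' \mid s, a) \, b(s)}{\Pr(o \mid b, a)}
    ```

    Showing how that update could generate its own state space, rather than presupposing one installed by a designer, is the burden your argument carries.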

    Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.

    I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.m-theory

    Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.

    To account for the competence of AlphaGo one cannot simply claim it is brute force of syntax as one might do with Deepblue or other engines.m-theory

    But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation, not the real thing.

    If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.

    The Chinese room does not refute computational theories of the mind, never has, and never will.
    It simply suggests that because the hardware does not understand then the software does not understand.
    m-theory

    Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

    But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
  • The intelligibility of the world
    Then use "sense" or basic perception if experience is too vague or too complex a notion for your material cause.schopenhauer1

    You miss the point. No matter how we might refer to dasein or whatever, in pointing to it, we are already constructing a conceptualised distance from it. We are introducing the notion of the self which is taking the view of the thing from another place.

    So even phenomenology has an irreducible Kantian issue in thinking it can talk about the thing in itself which would be naked or primal experience. Any attempt at description is already categoric and so runs immediately into the obvious problems of being a model of the thing. You can't just look and check in a naively realistic way to see what is there. Already you have introduced the further theoretical constructs of this "you" and "the thing" which is being checked.

    Oh come now. A baby or animal doesn't have brute fact experiences? It only becomes experience through some sort of linguistic filter? Blah.schopenhauer1

    Again, talk of animals having just brute fact experiences may be a convincing theoretical construct, but it is still essentially a construct.

    How do we imagine it to be an aware animal? Using reason, we can say it is probably most closely like ourselves in our least linguistic and self-conscious state - like staring out the window in a blank unthinking fashion. So we can try to reconstruct a state that is pre-linguistic. It doesn't feel impossible.

    But the point of this discussion is that it is humans that have a social machinery for structuring experience in terms of a logical or grammatical intelligibility. We actually have an extra framework to impose on our conceptions and our impressions.

    This is why there is an issue of how such a framework relates to the world itself. Is the machinery that seems epistemically useful for structuring experience somehow also essentially the same machinery by which the world ontically structures its own being? Is logic an actual model of causality in other words?

    You have to explain that better to be relevant in the conversation.schopenhauer1

    Or you have to understand better to keep up with the conversation. Definitely one or the other. :)