Comments

  • General purpose A.I. is it here?
    Semantics cannot exist without syntax.
    To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
    To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.
    m-theory

    Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp?

    A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.

    But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room.

    How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built?

    Sure, neural networks do try to implement this kind of biological realism. But the problem for neural nets is to come up with a universal theory - a generalised architecture that is "infinitely scalable" in the way that Turing computation is.

    If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.

    Every time some new algorithm must be written by the outside hand of a human designer rather than evolving internally as a result of experiential learning, you have a hand-crafted machine and not an organism.

    So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.
  • The intelligibility of the world
    How is the panpsychist that different from a pragmatic semiotic theorist if both take experience as a brute fact?schopenhauer1

    I would put "experience" in quote marks to show that even to talk about it is already to turn it into a measurable posit within a theoretical structure.

    So the main difference is that you are taking experience as a brute fact. Essentially you are being a naive realist about your phenomenological access. Qualia are real things to you.

    I would take qualia as being the kinds of facts we can talk about - given a suitable structure of ideas is in place.

    Your approach is illogical. Either it is homuncular in requiring a self that stands outside "the realm of brute experience" to do the experiencing of the qualia. Or the qualia simply are "experiential", whatever the heck that could mean in the absence of an experiencer.

    My way is logical. It is the global structure of observation that shapes up the appearance of local observables. And these observables have the nature of signs. They are symbols that anchor the habits of interpretation.

    So in talking about qualia - the colour red, the smell of a rose - this is simply how pixellating talk goes. It is something we can learn to do by applying a particular idea of experience to the business of shaping up experience's structure. If I focus hard in the right way, I can sort of imagine redness or a rose scent in a disembodied, elemental, isolated, fashion as the qualia social script requires. I can perform that measurement in terms of that theory and - ignoring the issues - go off believing that a panpsychic pixels tale of mind is phenomenologically supported.
  • The intelligibility of the world
    Well, that is not sensation, that is the structure in which sensation works within, not the sensation itself.schopenhauer1

    So you say. But good luck with a psychology which is not focused on a structure of distinctions as opposed to your panpsychic pixels.
  • The intelligibility of the world
    I don't get how logic is sensation then. I'm all ears.schopenhauer1

    It is the structure of sensation. And sensation without structure feels like nothing (well, like vagueness to be more accurate).

    So if the world is logically structured, then that is the structure sensation needs to develop to be aware of the world.

    And the world itself must be logically structured as how else could it arrive at an organisation that was persistent and self-stable enough for there to be "a world", as opposed to a vague chaos of disorganised fluctuations?
  • The intelligibility of the world
    Also, I think you might find interest in at least some of what the analytics have to say, particularly Koslicki, Loux, Lowe and Tahko (hard-core hylomorphist neo-Aristotelians).darthbarracuda

    Any secondary literature that talks about my primary interests - Anaximander, Aristotle and Peirce - is going to be interesting to me. And the secondary literature around Aristotle is of course vast. He is the context for metaphysics, so every camp has to have something to say on that.

    But we have strayed away from the OP.

    The speculative/contentious point that I make there is the one that is represented by Anaximander and Peirce, rather than Aristotle. And that is that the Cosmos is intelligible because it itself represents a creative process that can be understood as the bootstrapping development of intelligibility.

    So as a metaphysical position, it is "way out there". :)

    But also, it is a holistic way of thinking about existence which is pretty scientific now.

    So systems science or natural philosophy is an Aristotelean four causes tradition that indeed detours through German idealist philosophers like Schelling. And then Peirce makes the connection between symbol and matter as the way to operationalise the four causes in the way modern science can recognise. Formal and final purpose become top-down constraints that shape bottom-up material and effective freedoms. And constraints become the symbolised part of nature - the information that is the memory of a system or dissipative structure.

    So the intelligibility of nature is a consequence of nature itself being a fundamentally semiotic or "mind-like" process. That is why Peirce described existence as the generalised growth in reasonableness.

    But calling it mind-like is really only to stress how far out of Kansas we are when it comes to standard issue reductionist realism, which only wants to acknowledge a reality born of material and efficient cause. So calling it mind-like isn't to invoke a phenomenological notion of mind, nor the dualist notion of mind, but instead semiotics' own idea of mindfulness, which is quite different in its own way metaphysically.
  • The intelligibility of the world
    The late E.J. Lowe, Jonathan Schaffer, Tuomas Tahko, Ted Sider, Susan Haack, Michael J. Loux, the late David Lewis, Peter van Inwagen, Timothy Williamson, Amie Thomasson, Sally Haslanger, David Chalmers, Kit Fine, D. M. Armstrong, Trenton Merricks, Eli Hirsch, Ernest Sosa, Daniel Korman, Jaegwon Kim, etc.

    The analytics.
    darthbarracuda

    Yep. Most of those I would be in deep disagreement with. But that is because they represent the reductionist and dualistic tendency rather than the romantically confused.

    That is why I am a Pragmatist. As I said, reductionism tries to make metaphysics too simple by arriving at a dichotomy and then sailing on past it in pursuit of monism. The result is then a conscious or unwitting dualism - because the other pole of being still exists despite attempts to deny it.

    You read Heidegger, Husserl, the idealists?darthbarracuda

    Not with any great energy. I'm quite happy to admit that from a systems science standpoint, it is quite clear that the three guys to focus on are Anaximander, Aristotle and Peirce. Others like Kant and Hegel are important, but the ground slopes away sharply in terms of what actually matters to my interests.
  • The intelligibility of the world
    Also, contemporary realist metaphysics is largely concerned with ontology and not with the broader metaphysical stories.darthbarracuda

    Again, who are you talking about in particular?

    It's far more conservative than your version of metaphysics, with the only notable things I can think of being discussions of supervenience, grounding, causality and semantic meaning.darthbarracuda

    What you might be talking about just keeps getting muddier to me.
  • The intelligibility of the world
    I'd still like to know what you think are examples of bad metaphysics.darthbarracuda

    It's hard to be particular because the ways of expressing the generalised confusion of romanticism are so various. But anything panpsychic like Whitehead, or aesthetic like SX cites. I don't mind theistic approaches because they stick to a Greek framework of simplicity and so can deal with the interesting scholarly issues - right up to the point where God finally has to click in.
  • The intelligibility of the world
    What is this particular way? The semiotic trifold?darthbarracuda

    That is what I argue is the most penetrating model of it, yes.
  • The intelligibility of the world
    What legitimate differences are there between your conception of metaphysics and theoretical physics?darthbarracuda

    As I've already said, I see metaphysics and science as united by a common method of reasoning - the presumption the world is intelligible because it is actually rationally structured in a particular way.

    So the only possible other choice - given that method has become so sharply defined and unambiguous - is whatever is its sharp "other". And I am afraid we do see that other showing its Bizarro head and claiming to be doing Bizarro metaphysics (and also crackpot science, of course).

    Nobody pays you to think about the world, they pay you for results that can be applied to the economy in some way, and everyone's gotta pay the bills.darthbarracuda

    That is sadly true on the whole as I say. Even philosophy and fine art courses push the modern marketability of the critical thinking skills they teach.

    But still, if we are talking about who is best equipped to do metaphysical-strength thinking these days, that is a different conversation.
  • The intelligibility of the world
    I don't really understand what you have in mind when you say "romanticism" or "PoMo". Do you not appreciate Spinoza, Descartes, Husserl, Heidegger, etc? Only some? Only those who aren't easily fitted into your pragmatism?darthbarracuda

    All celebrated figures are celebrated for some reason. So I wouldn't dismiss anyone or any movement out of hand. But yes, I am saying something much stronger than merely that romanticism does not fit easily with rationalism. I'm saying it is the maximally confused "other" of rationalism.

    And pragmatism - if understood properly - is the best balance of the realist and idealist tendencies in philosophy. So it already incorporates phenomenology, or the irreducibility of being in a modelling relation with the world, in its epistemology.

    Science - as a method - isn't naive realism or even bald empiricism. It is rational idealism. It is a method that starts by accepting knowledge is radically provisional, and then works out how to proceed from there.

    Well, yes and no. If measurement is the only way of understanding the world (what I see as empiricism), then either it must be shown how philosophy utilizes measurement, or it must be seen with skepticism.darthbarracuda

    Do you think philosophy could have got going if philosophers were blind, deaf and unfeeling? Of course measurement is already involved in having sensations of the world.

    The point of philosophy is that ideas and perceptions are so biologically and culturally entangled with each other in ordinary life. So as a method, it works to separate these two aspects of the modelling relation from each other. It started by showing sensation (biological measurement) could be doubted, just as beliefs (cultural ideas) could be doubted.

    Then eventually this evolved into science where acts of measurement - turning an awareness of the world into numbers read off a dial - became the "objective" way to operate. But calling measurement objective is a little ironic given that it is so completely subjective now in being dependent on understanding the world only in terms of dial readings. Science says, well, if in the end there is only our phenomenology, our structure of experience, then let's consciously make even measurement a phenomenological act.

    Usually philosophy utilizes things like counterfactual reasoning, thought experiments, etc. Other fields use these as well. These are generally "fuzzy" in their nature, though. When a philosopher thinks up something like, let's say, Neo-Platonism, it's extremely abstract and fuzzy.darthbarracuda

    If we are talking about metaphysics, there is nothing fuzzy about its reasoning method. The dichotomy or dialectic says quite simply that possibility must divide into either this or that - two choices that can be seen to be mutually exclusive and jointly exhaustive.

    The only thing "fuzzy" is that people then take up different positions about the result of this primary philosophical act. You can treat a dichotomy as either a problem - only one possibility can be true, the other must be false. Or the opposite to such monism is to embrace the triadic holism that resolves the division - adopt the hierarchical view where dichotomies are differentiations that also result then in integration. In splitting vague possibility apart into two crisply complementary things, that then is what becomes the basis of an existence in which the contrasts can mix. The world is the everything that can stand between two poles that represent mutually-derived extremum principles.

    In other words, a constraint is a totally different kind of thing from a zebra. The latter is studied by biologists, the former (as it is-itself) the metaphysician.darthbarracuda

    WTF? Have you ever taken a biology class? Are you so completely unaware of the impact that science's understanding of constraints has had on metaphysics? Next you will be saying Newton and Darwin told us a lot about falling apples and finch beaks, and contemporary philosophy shrugged its shoulders and said "nah, nothing to see here folks".

    I'm referring to contemporary realist analytic metaphysics.darthbarracuda

    It's true that those employed in philosophy departments struggle to produce anything much that feels new these days. The real metaphysics of this kind is being done within the theoretical circles of science itself. The people involved would be paid as scientists.

    Yet starting with Ernst Mach, there is a real tradition of encouraging a useful level of interaction. And analytic types fit in pretty well as interpreters, critics and synthesisers. At the bleeding edge of ideas, any academic boundaries are in practice rather porous.

    I think you may just have an idea that science is somehow basically off track and you need a metaphysical revolution led by philosophers to rescue it.

    So instead you see a world where science charges along, and metaphysicians look more like sucker fish hitching a ride, picking off some crumbs. And because it doesn't match your preconception, you read that picture wrong.
  • The intelligibility of the world
    There's different methods within this broad "scientific" account you presented. If you're an astronomer, you'll use a telescope. If you're a microbiologist, you'll use a microscope. If you're a chemist, you'll use a thermometer and a plethora of other expensive equipment; same goes for practically any scientific field.darthbarracuda

    Yes, the business of measurement is various.

    But I thought you were saying there are other methods of seeking intelligibility itself - methods that aren't just the general method of scientific reasoning.

    Again, my position is that the world is intelligible - it actually is structured in terms of constraints and freedoms, global rules that shape local instances.

    And so it is not surprising that once human thinking aligns with that - once that is our conscious method of inquiry - then we find the world to be surprisingly easy to make sense of.

    And on this score, science is just applied metaphysics. It is a historical continuation of a method to its natural conclusion. Science has just taken the intelligible categories of Greek metaphysics - the dichotomous questions like is existence atomistic or is it holistic - and polished up the mathematical expression of the ideas, and the ability to then check them through a process of supporting measurements.

    You can rightfully point out that the purpose for even thinking this way about existence is a further matter of complication.

    The point about metaphysical/scientific reasoning is that it is meant to be dispassionate. It is meant to be the view of reality that transcends any particular human or social interests. By replacing gods, spirits, customs and values with a naked system of theory and measurement, the thought was that this would allow the Cosmos to speak its own truth, whatever that might be. We would see its reality unfiltered.

    But of course it is really difficult in fact to suppress all our own natural interests when investigating the world. It is obvious that even science embeds a strong human interest in gaining a mechanical/technological control over material existence. So science, in practice, is not as dispassionate as it likes to pretend.

    But still, the reasoning method is designed to let the Cosmos speak for itself as much as might be possible. It is objective in offering ways to take ourselves out of the equation as much as we let it.

    So then, on that score, scientific reasoning conjures up its own Romantic other. If cosmological reasoning - the kind that targets intelligible existence - has the goal of being dispassionate, then of course that opens the door to the notion of a counter-method based on being humanly passionate in trying to answer the same questions.

    So everything reason does, Romanticism would want to do the opposite.

    Instead of objectivity, let's have maximum subjectivity. Instead of careful measurement of the world, now any imagined idea about the world is good enough. Instead of the formal mathematical expression of ideas, let's try opaque poetic grandiloquence. Instead of expecting global intelligibility, let's expect global incoherence.

    So it is an inevitable part of rationality's success at developing itself into a tight self-supporting methodology that it should also, automatically, produce its Bizarro world other.

    I guess on that score, science could be said to have room for only the one method, while modern philosophy - having less culturally patrolled boundaries - certainly has room for the two.

    But that is my analysis of the variety of methods that might exist in philosophy. I haven't heard what other methods of "reasoning" you have in mind when it comes to the standard issue approach of intelligibility-seeking metaphysics.

    The point being made, though, is what exactly is the subject matter of philosophy, in particular metaphysics, that makes it a legitimate attempt to understand the world, and why this subject matter is usually unable to be studied by more..."mainstream" science.darthbarracuda

    So it is important to you that there be a difference? Are you seeking to erect a cultural fenceline even if it need not exist? This is what I find weird about your stance.

    Or I guess not. It is daunting if it is the case that to do metaphysics in the modern era requires one to actually have a deep knowledge of science and maths as well. That's a lot of work.

    There aren't really any "discoveries" within metaphysics, just explanations of what we already see on a day-to-day basis.darthbarracuda

    Nope. That seems an utterly random statement to me. Do you have an example of current metaphysics papers of this kind?
  • The intelligibility of the world
    ...there seems to be more than one method of understanding the world.darthbarracuda

    So apart from "scientific" reasoning - a process of guessing a general mechanism, deducing its particular consequences, then checking to see if the behaviour of the world conforms as predicted - what are these other methods? Can you explain them?

    To say the world is intelligible is to say it is structured in terms of local instances of global rules. And so any method is going to boil down to seeking the global rules that can account for local instances. Where's the variety there?
  • The intelligibility of the world
    Here's a definition of self-organization I came across at BusinessDictionary.com: "Ability of a system to spontaneously arrange its components or elements in a purposeful (non-random) manner, under appropriate conditions without the help of an external agency."

    There are a number of questionable issues here.
    Metaphysician Undercover

    So this is an example of how science does think through its metaphysics. As already said to you in other threads where you have rabbited on about the nature of purpose, a naturalistic systems view demystifies it by talking about final cause in terms of specific gradations of semiosis.

    {teleomaty {teleonomy {teleology}}}.

    Or in more regular language, {propensity {function {purpose}}}.

    So we would have a mere physico-chemical level of finality as a propensity, a material tendency. A bio-genetic level of finality would be a function, as in an organism. And then a psycho-linguistic level of finality would be that which we recognise in a thinking human.

    See: http://www.cosmosandhistory.org/index.php/journal/article/view/189/284
  • The intelligibility of the world
    But the traditionalist account of intelligibility was such that it conveyed the sense of a complete, (if you like illuminated) understanding, in the sense of there no longer being any shortcoming or gap between the understanding and the thing understood.Wayfarer

    The Greeks were naturally stunned at finding that mathematical arguments have the force of logical necessity. If we take certain geometric axioms as unquestionable truths, then a whole bunch of incontrovertible results follow deductively.

    It was literally the creation of a machinery of thought. And rather than some spiritual illumination, it was a Philosophism (as a precursor to Scientism). :) Plato was the Dawkins of his day to the degree that he reduced the world to a literal abstraction. A perfect triangle or perfect sphere was something real and substantial that could be grasped via the rationality of the mind - and as an idea, acted to form up the imperfect matter of the world.

    So this worshipful approach to the awe of mathematical reason - the demonstration that axiom-generated truths looked to explain the hidden regularity of nature - was understandable as a first reaction. But we've since also learnt that maths is only as good as the assumptions contained in its axioms. So maths itself is no longer quite so magical, just pragmatically effective. Yet also our connecting of maths to the world via the scientific method has developed so much that the essential wonder - that existence is intelligible in this pragmatic modelling fashion - persists.

    It is no longer amazing that the Cosmos is intelligible - it has to be just to exist as a self-organised state of global regularity. But it is amazing that we can really get at that structure through the dynamic duo of maths and measurement.

    Or where it becomes less amazing again, we should qualify it by mentioning that humans naturally favour the knowledge that pays its own way in terms of serving humanity's most immediate interests. Which is where Scientism and reductionism come in - the narrower view of causation that produces all our technology (including our political and economic "technology").

    Both philosophy and science are not big fans of holism. The great metaphysical system builders like Peirce and Hegel are held in deep suspicion. Neither AP nor PoMo likes grand totalising narratives. The idea that reality might be a reasonable place - actually driven by the purpose of becoming organised - is as unfashionable as it gets ... because society wants the machine thinking that creates the machines it is now dependent upon. He who pays the piper, etc.
  • The intelligibility of the world
    I think 'intelligible' traditionally relates to ordinary speech, not to philosophical discourse, and means that we can make out what the person is trying to communicate.andrewk

    Given this is a philosophy board and the OP was clearly meaning to apply the philosophical usage, talking instead about issues of ordinary language comprehension is an unhelpful sidetrack.

    I'll post the Wiki definition if it helps....

    In philosophy, intelligibility is what can be comprehended by the human mind in contrast to sense perception. The intelligible method is thought thinking itself, or the human mind reflecting on itself.

    Plato referred to the intelligible realm of mathematics, forms, first principles, logical deduction, and the dialectical method. The intelligible realm of thought thinking about thought does not necessarily require any visual images, sensual impressions, and material causes for the contents of mind.

    Descartes referred to this method of thought thinking about itself, without the possible illusions of the senses. Kant made similar claims about a priori knowledge. A priori knowledge is claimed to be independent of the content of experience.

    So the metaphysical surprise is that reality is logically structured. It appears to conform to the laws of thought. The world seems to operate with order and reason - regulated by formal/final cause or abstract rational principles.

    Traditionally, this seemed such a surprise that it was mystical. A transcendent cause of order seemed necessary because nature itself is naturally messy, with an ever-present tendency towards disorder.

    But now - through science and maths - we have discovered how structure in fact arises quite naturally in nature through fundamental principles of thermodynamic self-organisation. Disorder itself must fall into regular patterns for basic geometric reasons to do with symmetries and symmetry-breakings.

    So the intelligibility of the Cosmos is far less of an issue these days. We have things like selection principles and least action principles that explain the emergence of order even from randomness.
  • General purpose A.I. is it here?
    If the mind is something you can be sure that you have and that you can be sure correctly each time you inquire about the presence of your own mind...this would mean the term mind will be something that is fundamentally computational.m-theory

    Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here.

    Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions?

    It seems to me still that sure we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.
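    To make the contrast concrete, here is a minimal sketch of the core POMDP machinery - the Bayesian belief update over hidden states. The state names, actions, observations and all the probabilities below are hypothetical, and are supplied from outside by the designer (as the transition and observation models always are in a POMDP), which is exactly the dependence at issue: the update itself is pure symbol shuffling over whatever models the human hands it.

```python
# Minimal POMDP belief update (Bayes filter) over two hidden states.
# T and O are designer-supplied models - the "outside hand" of the argument.

def belief_update(belief, action, obs, T, O):
    """b'(s') is proportional to O[s'][obs] * sum_s T[(s, action)][s'] * b(s)."""
    states = list(belief)
    new_b = {}
    for s2 in states:
        new_b[s2] = O[s2][obs] * sum(T[(s, action)][s2] * belief[s]
                                     for s in states)
    total = sum(new_b.values())          # normalise to a probability distribution
    return {s: p / total for s, p in new_b.items()}

# Hypothetical hand-crafted models: two states "A"/"B", one action "go",
# two observations "ping"/"silence".
T = {("A", "go"): {"A": 0.7, "B": 0.3},
     ("B", "go"): {"A": 0.2, "B": 0.8}}
O = {"A": {"ping": 0.9, "silence": 0.1},
     "B": {"ping": 0.3, "silence": 0.7}}

# From a uniform prior, act and observe once:
b = belief_update({"A": 0.5, "B": 0.5}, "go", "ping", T, O)
# b now favours state "A" (roughly 0.71 to 0.29)
```

    Note the design point: nothing in the update "knows" what a ping means. Change the labels and the arithmetic is identical, which is the Chinese Room worry stated in code.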

    A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out during some moment.

    So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.

    Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware designing mind - to actually underwrite that. The computation itself is just blind, helpless, syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.
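    As a toy illustration of that point, a Turing-style step function is nothing but a table lookup - this hypothetical two-symbol machine is mine, not anything proposed in the thread. A smudged symbol that isn't a key in the table does not get reinterpreted; the lookup simply fails, which is the sense in which the syntax is blind and helpless without an external designer to underwrite it.

```python
# One step of a toy Turing machine: the "reading head" is a dict lookup.
# A symbol outside the table's alphabet raises KeyError - no interpretation.

def step(state, head, tape, table):
    symbol = tape.get(head, "_")                     # blank cells read as "_"
    new_state, write, move = table[(state, symbol)]  # pure syntax: table lookup
    tape[head] = write
    return new_state, head + (1 if move == "R" else -1), tape

# Hypothetical machine: flip bits, moving right, until a blank is hit.
table = {("flip", "0"): ("flip", "1", "R"),
         ("flip", "1"): ("flip", "0", "R"),
         ("flip", "_"): ("halt", "_", "R")}

state, head, tape = "flip", 0, {0: "1", 1: "0"}
while state != "halt":
    state, head, tape = step(state, head, tape, table)
# tape now holds {0: "0", 1: "1", 2: "_"}
```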

    So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.

    A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffins to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics.
  • The intelligibility of the world
    But one can have an a-utility understanding. For example: you understand that Gandalf loves his Hobbits. This is true understanding, but it is also useless understandingIVoyager

    Of course you would have to have useless understandings. That is what justifies talking about the contrary of a useful understanding. Again, this is how we render the world intelligible - A exists because not-A exists to make the existence of A crisply a fact.
  • The intelligibility of the world
    Now I'm skeptical of science alone being able to answer these questions, as if it can operate without a rudimentary metaphysical structure, but what remains to be shown is why this is the case - that is to say, why some questions are empirical and other apparently not.darthbarracuda

    It is a faulty binary to go about saying science is empirical, philosophy is rational, therefore the two are mutually exclusive. Sure, you can advance that theory of the world in a way that makes it intelligible for you. But measurement should demonstrate the faultiness of such reasoning.

    You yourself just said Schopenhauer was a rather empirical chap. And science is a deeply metaphysical exercise, explicit in making ontic commitments to get its games going.

    So you are applying the method by which we attempt to achieve intelligibility - trying to force through some LEM-based account of the world. But you are failing to support it with evidence.
  • The intelligibility of the world
    Note the mention of worth/value, which is a sort of ineffable ground.who

    Yes, it is important to a proper understanding of pragmatism - the original Peircean version rather than the popularised Jamesian one - that it isn't simply a presumption of some utilitarian ground of value. What it means to "work" - to serve a purpose - is also up for discussion as part of the epistemology. So it is really a claim about the value of a general reasoning method.
  • The intelligibility of the world
    If it is indeed the case that science has an epistemology, then this just further shows how philosophy is a separate and prior domain.darthbarracuda

    Why the snobbery? Historically, science has clearly been philosophy's best and sharpest expression of itself. Its pragmatism deals with idealism/realism in systematic self-grounding fashion.

    You seem to miss the whole point of intelligibility. It is about constraining possibility so that it leaves you with a crisp framework of yes/no binary questions about existence. And once you have a theory expressed in counterfactuals, then you can actually make matchingly crisp measurements in the name of the theory. You can answer the questions with experienced facts.

    So intelligibility is pragmatism. It doesn't mean "being able to be understood". It means being understood in that particular way.

    If you want to understand reality some other way, say a prayer or hold a seance. Or learn to write obscure PoMo texts that are the opposite of intelligible models of existence.
  • The intelligibility of the world
    For example, "science" cannot tell us whether or not we should be scientific realists, or what a property is, or what constitutes knowledge.darthbarracuda

    So science has no epistemology? Gee, that's news to me.
  • Naming and identity - was Pluto ever a planet?
    There might be material facts that influence what sort of identities we impose on what sort of things, but the connection is merely contingent, not necessary.Michael

    Isn't that how language is meant to work? So it is the feature, not the bug.

    A name is a symbol with no necessary connection to what it is meant to stand as a sign of. The word "pig" has no properties that are pig-like. So to call a pig "pig" is an arbitrary association.

    But we then exploit that naming freedom in particular ways. Because we can thus give a name to anything at our whim, we can name those things that we believe are general, or are particular; that are fictional, or are real; that are contingent, or essential. Names can span the full gamut of possible ontic commitments by not being tied to any particular ontic commitments.

    So the question of whether Pluto is a planet is understood as a language game with a particular ontic commitment. A planet is a real kind of object (or process) with nameable real properties - like being gravitationally spherical, and dominant in its orbit, big enough to clear a path of other contenders.

    So yes, the act of naming is contingent in that clearly we only bother to categorise the world in ways that reflect our epistemic interests. But then also, a major such interest is a commitment to ontic realism. We like to be able to classify the world into types of objects, and talk about the necessary qualities of these types, and the particularly significant instances of these types.

    We develop a way of talking about reality that seems to get its reality right.

    So while it can always be pointed out that names are arbitrary sounds coming out of our mouths, it is also the case that we are using this naming freedom to make a stronger ontic claim than it would otherwise be possible to make.

    To call Pluto a planet is to assert it is a real object of that class, and so not some other class, like a moon, or an asteroid, or a planet-let. And in being real, that classification is open to being changed by empirical evidence. We can give names to a planet's essential qualities in terms of acts of measurement we might perform. It's all part of realism's particular language game.

    So language organises naive experience into a structured place of ontic commitments. We can really develop a belief in real things because we also now know what it would be for them not to be real, but instead classified as fictions, ideas, faulty information, etc.
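    To make that realist language game concrete, the IAU-style classification rules named above (gravitationally spherical, dominant in its orbit) can be written out as an explicit decision procedure. A minimal sketch - the data flags are illustrative stand-ins for real acts of measurement:

```python
# The realist language game made explicit: "planet" as a rule-based class.
# Criteria follow the IAU 2006 definition; the data flags are illustrative.

def classify(body):
    """Return the class name the rules assign to a body."""
    if not body["orbits_sun"]:
        return "moon (or other satellite)"
    if not body["spherical"]:
        return "small body (e.g. asteroid)"
    if not body["clears_orbit"]:
        return "dwarf planet"
    return "planet"

pluto = {"orbits_sun": True, "spherical": True, "clears_orbit": False}
earth = {"orbits_sun": True, "spherical": True, "clears_orbit": True}

print(classify(pluto))   # dwarf planet
print(classify(earth))   # planet
```

The point of the sketch is that each branch is answerable by a crisp empirical measurement, which is exactly what makes the classification revisable - as Pluto's demotion showed.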
  • General purpose A.I. is it here?
    No of course I don't agree that the best theory of the mind must be biological.m-theory

    Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist.

    I offered the that the pomdp could be a resolution.
    You did not really bother to suggest any reason why that view was not correct.
    m-theory

    But it is a resolution in being an implementation of the epistemic cut. It represents a stepping back into a physics-free realm so as to speak about physics-constrained processes.

    The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own.

    Mind is only found in living organic matter therefor only living organic matter can have a mind.
    That is an unasailable argument in that it defines the term mind to the exclusion of inorganic matter.
    But that this definition is by necessity the only valid theory of the mind is not simply a resolved matter in philosophy.
    m-theory

    Fortunately we only have to consider two theories of mind in this discussion - the biological and the computational. If you want to widen the field to quantum vibrations, ectoplasm, psychic particles or whatever, then maybe you don't see computation as being relevant in the end?

    It is not immediately clear to me how this general statement can be said to demonstrate necessarily that computation can not result in a mind.m-theory

    So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem.

    My argument is on the first page below Tom's post.m-theory

    That's a good illustration of the absolute generality of the measurement problem then. To have a formal theory of the mind involves also the informal choice about what kind of measurement stands as a sign of a mind.

    We are talking Gödelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide: oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.

    But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.

    Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.

    You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different.
  • General purpose A.I. is it here?
    Of course I disagree that the mind must necessarily always be biological...but that is a semantic debate surrounding how the term is defined.
    You have decided that the term mind must be defined biologically to the exclusion of a computational model.
    m-theory

    In your stubbornness, you keep short-cutting my carefully structured argument.

    1) Whatever a mind is, we are as certain as we can be that biology has the right stuff. Agreed?

    2) The best theory of what kind of stuff that actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed?

    3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed?

    4) Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way.

    Yes and as far as I could tell from your source material it was claimed that the origin of life contains a quantum measurement problem.
    The term epistemic cut was used synonymously with the quantum measurement problem and the author continuously alluded to the origins of self replicating life.
    m-theory

    This is another unhelpful idée fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s, as DNA was being discovered and the central mechanism of evolution was becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem.

    Imagine if the body and brain had a sudden interruption in the supply of electrons within its neurological system?
    Biology is not without stability.
    m-theory

    You seem to be imagining that electrons are like little Newtonian billiard balls or something. Quantum field theory would say a more accurate mental picture is excitations in a field. And even that leaves out the difficult stuff.

    But anyway, again of course there is always stability and plasticity in the world. They are complementary poles of description. And the argument from biophysics is that dynamical instability is essential to life because life depends on having material degrees of freedom that it can harness. For biological information to act as a switch, there must be a physico-chemical instability that makes for material action that is switchable.

    I don't agree semantics can only occur in biology.m-theory

    Fine. Now present that evidence.

    Again I refer to the alternative of a undecidable mind.
    We could not know if we had one if the mind is not algorithmic it is that simple.
    If we can know without error that we have minds this is the result of some algorithm which means the mind is computational.
    m-theory

    No idea what you are talking about here.
  • General purpose A.I. is it here?
    We might disembody a head and sustain the life of the brain without a body by employing machines.
    Were we to do so we would not say that this person has lost a significant amount of their mind.
    Would we?
    m-theory

    That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

    Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.

    My notion was that we might hope to model something like the default mode network.m-theory

    That is simply how the brain looks when attention is in idle mode with not much to do. Or indeed when attention is being suppressed to avoid it disrupting smoothly grooved habit.

    If you state that the origins of life must be understood in order that we understand the mind that is claim that entails burdens of proof.m-theory

    Who is talking about the origins of life - the problem of abiogenesis? You probably need a time machine to give an empirical answer on that.

    I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.

    The main issue at hand is whether or not computational theory of the mind is valid.
    Not whether or inorganic matter can compute.
    m-theory

    Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?

    And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.

    And we know that computation is rooted in material stability? Hardware fabrication puts a lot of effort into achieving that, starting by worrying about the faintest speck of dust in the silicon foundry.

    And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?

    So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt.
  • General purpose A.I. is it here?
    And I believe that somewhere in the middle is where the mind breakthrough will happen.
    I believe this because a great deal of what the body and brain do is completely autonomous from the mind...or at least what we mean by the term mind.
    m-theory

    Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent.

    For this reason I think simulations of thought do not have to recreate the physics of biology at the nano scale before a mind can be modeled.m-theory

    This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here.

    Perhaps the computer simulation only needs to be as coarse-grained as you describe. But you have to be able to provide positive reasons to think that is so, rather than make the usual computer science presumption that it probably is going to be so.

    And part of that is going to be showing that simulations are more than just syntactical structures. You have to have an account of semantics that is grounded in physicalism, not in some hand-wavy dualistic folk psychology notion of "mind".

    I just don't agree that intelligence is necessarily dependent upon that state.
    I don't see why computers can not be the "right stuff" as you put it.
    Pattee does not provide conclusive evidence that such is the case.
    And you haven't either.
    m-theory

    But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff.

    If you also think a machine can be the right stuff, then why isn't it already easier to produce artificial life before we can produce artificial mind? DNA is just a genetic algorithm, right? And we understand biology better than neurology?

    So maybe we are just fooling ourselves here because we humans are smart enough to follow rules as if we are machines. We can walk within lines, move chess pieces, write squiggled pages of algebra. And we can even then invent machines that follow the rules we invent in a fashion that actually is unaware, syntactic and simulated.

    That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.

    Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get.
  • General purpose A.I. is it here?
    But I don't agree that we have to solve the origin of life and the measurement problem to solve the problem of general intelligence.m-theory

    Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are the currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.

    I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.m-theory

    That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

    The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.

    I explained this fairly carefully in a thread back on PF if you are interested....
    http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

    So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.

    What is wrong with bayesian probability I don't get it either?m-theory

    I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

    But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.
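    For concreteness, the anticipatory core of any such Bayesian approach can be sketched in a few lines: the model predicts what it expects to observe, then revises itself against what it actually measures. The numbers here are hypothetical, purely to show the predict-then-update cycle:

```python
# Toy anticipatory loop: predict, observe, update (Bayes' rule).
# All numbers are hypothetical, for illustration only.

prior = {"world-A": 0.7, "world-B": 0.3}         # current model of the world
likelihood = {"world-A": 0.2, "world-B": 0.9}    # P(observation | world)

# Anticipation: what the model expects to see before looking
expected = sum(prior[h] * likelihood[h] for h in prior)

# The measurement arrives; the model revises itself by Bayes' rule
posterior = {h: prior[h] * likelihood[h] / expected for h in prior}

print(round(expected, 2))              # 0.41 - the forward prediction
print(round(posterior["world-B"], 2))  # 0.66 - surprise shifts belief to B
```

The forward prediction is what makes the scheme "anticipatory" in the modelling-relation sense: the system commits to an expectation before the world answers back.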

    Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

    Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

    And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.
  • General purpose A.I. is it here?
    I have read some more and you are right he is very technically laden.m-theory

    Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.

    I was hoping for a more generalized statement of the problem of the epistemic cut because I believe that the Partially observable Markov decision process might be a very general solution to establishing an epistemic cut between the model and the reality in an A.I. agent.m-theory

    Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?
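    For readers unfamiliar with the formalism, a POMDP belief update really is just Bayes' rule applied to lookup tables of transition and observation probabilities - which is what makes the "is it only syntax?" question pointed. A minimal sketch, with made-up tiger-behind-the-door numbers:

```python
# Minimal POMDP belief update - pure table lookup plus Bayes' rule.
# All probabilities are hypothetical, for illustration only.

states = ["tiger-left", "tiger-right"]

# T[s][s2] : state transition probabilities under a "listen" action
T = {"tiger-left":  {"tiger-left": 1.0, "tiger-right": 0.0},
     "tiger-right": {"tiger-left": 0.0, "tiger-right": 1.0}}

# O[s2][o] : probability of each observation in each hidden state
O = {"tiger-left":  {"hear-left": 0.85, "hear-right": 0.15},
     "tiger-right": {"hear-left": 0.15, "hear-right": 0.85}}

def update(belief, obs):
    """Bayes update: b'(s2) is proportional to O(o|s2) * sum_s T(s2|s) b(s)."""
    new = {}
    for s2 in states:
        predicted = sum(T[s][s2] * belief[s] for s in states)
        new[s2] = O[s2][obs] * predicted
    norm = sum(new.values())
    return {s: p / norm for s, p in new.items()}

b = {"tiger-left": 0.5, "tiger-right": 0.5}
b = update(b, "hear-left")
print(round(b["tiger-left"], 3))   # 0.85 - the "belief" is just arithmetic
```

Whether that arithmetic over a designer-supplied state space amounts to the machine's own semantics, or only the designer's, is exactly the dispute in this thread.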
  • What do you think about the new emergent field of quantum semiotics?
    For anyone interested in this kind of thing, two really good examples are these....

    First a non-Peircean semiotic approach that actually still maps very nicely to a Peircean one....

    Vern S. Poythress - Semiotic analysis of the observer in relativity, quantum mechanics, and a possible theory of everything
    http://frame-poythress.org/wp-content/uploads/2016/06/PoythressVernSemioticAnalysisOfTheObserver.pdf

    And then a Peircean approach which is rather more technical, but worth quoting is.... http://cds.cern.ch/record/948191/files/0605099.pdf

    In the field of Physics emphasis in the Peircean semiotic categories has been attempted in different ways. There are three modes of being, the three phenomenological categories of C. S. Peirce:

    1. Firstness = the potential.
    2. Secondness = the actual.
    3. Thirdness = the general.

    In Peirce's philosophy these categories are very broad concepts with applications in metaphysics, cosmology, psychology, and general semiotic. In Classical Mechanics only Secondness occurs: There is no spontaneity (Firstness) and no irreversible tendencies to seek equilibrium in various types of attractors (Thirdness), only specific states leading to specific trajectories through the state space.

    In Thermodynamics both other categories enter the scene: Thirdness by the irreversible tendency of the systems to end in an equilibrium state, determined by the boundary conditions, where all features of the initial state have been wiped out by internal friction. Firstness is reflected in thermodynamics by the spontaneous random fluctuations around the mean behavior, conditioned by the temperature and the frictional forces.

    The Firstness category is the most difficult to grasp, because when we try to exemplify it by specific examples and general types we are already introducing Secondness and Thirdness. However, Firstness has made a remarkable entry into Quantum Mechanics through the concept of the wave function as describing the state of a system. The properties of a system that are inherent in its wave function are only potential, not actual. An electron has no definite position or momentum; these properties only become actualized in the context of specific types of apparatus and acts of measurement.

    So the general point is that physics is often treated as if it were an exercise in the observerless description of reality - just the facts, guv. And yet from Kant on, the epistemological problem of how we see past our own innate presumptions about reality has been clear to anyone in philosophy of science.

    So semiotics is an epistemic framework for dealing with observers and their production of observables. And as the Prashant paper in particular emphasises, phenomenological categories actually appear to cash out as ontic ones. So semiotics becomes more than just a way to describe reality with more completeness (the completeness of talking about observers in a modelling relation with the world). Now in Peircean trade-marked fashion, it actually becomes itself an ontic model of that world.

    As Prashant argues, the Peircean claim that reality is an irreducibly triadic sign relationship is just not visible from the standard classical Newtonian perspective which reduces all physical being to secondness - particles in motion having their individual events.

    But in the modern era, physics is coming to be centred on a thermodynamical systems perspective where you have the irreducible threeness of local fluctuations, actual energetic exchanges, and then the finality of global habits or inevitable tendencies.

    And with quantum physics, this is completely tied back to the irreducibility of the observer problem as we know - another way of saying you have to have a three-cornered ontology where observers are habits that shape potentials into actual patterns of events. You just can't split the observer from that which is the observable, as has been the traditional dualistic assumption.

    The Prashant paper makes a nice connection there in suggesting how the three parts of the sign relation - potentiality, actuality and generality - map to the essential levels of the quantum mathematical machinery of wavefunction, operator and eigenvalue.
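    That mapping can be made concrete with the simplest possible case: the wavefunction encodes only potentialities, the operator's eigenvectors supply the generalities, and the Born rule says with what probability an eigenvalue gets actualised in a measurement. A toy sketch using the Pauli-X spin observable (eigenvalues ±1, with the standard eigenvectors):

```python
import math

# Toy Born-rule illustration of the wavefunction -> operator -> eigenvalue
# mapping: potentiality, generality, actuality.
# Observable: Pauli-X, eigenvalues +1 and -1, eigenvectors (1,1)/sqrt(2)
# and (1,-1)/sqrt(2).

psi = (1.0, 0.0)   # the wavefunction: pure potentiality, no definite spin-x
eigenstates = {
    +1: (1 / math.sqrt(2),  1 / math.sqrt(2)),
    -1: (1 / math.sqrt(2), -1 / math.sqrt(2)),
}

def born_probability(state, eigvec):
    """P(outcome) = |<eigvec|state>|^2 - potentiality cashed out as odds."""
    amp = sum(e * s for e, s in zip(eigvec, state))
    return amp ** 2

for eigenvalue, vec in eigenstates.items():
    # Measurement actualises one eigenvalue; until then, only the odds exist.
    print(eigenvalue, round(born_probability(psi, vec), 3))  # each 0.5
```

Nothing here settles which eigenvalue actualises - the formalism delivers only the distribution of potentialities, which is the Firstness point.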

    So not sure if this is Vadim's area at all. The mention of an "AI convergence" is a little mysterious.

    But Peircean semiotics as an ontological model of the Cosmos - always going against the reductionist mainstream in insisting on stuff like the fundamentality of indeterminism and the developmental nature of cosmological order - has always been prescient of the physics that actually emerged over the past 100 years.
  • General purpose A.I. is it here?
    I argue that because this algorithm has to learn from scratch it must discover it's own semantics within the problem and solution to that problem.m-theory

    That is the question. Does it actually learn its own semantics, or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?

    The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.
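    That point can be made concrete: a complete Turing machine is nothing but a rule-lookup table plus a head that can tell which symbol it is over. A minimal sketch - this hypothetical machine increments a binary number, but the fact that the tape "means" a number lives entirely outside the machine:

```python
# A complete Turing machine as a bare rule-lookup table.
# This one increments a binary number; the "semantics" (that the tape
# stands for a number) exists only for the human reading this comment.

rules = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan",  "0"): ("0", +1, "scan"),
    ("scan",  "1"): ("1", +1, "scan"),
    ("scan",  "_"): ("_", -1, "carry"),   # hit the right edge: start carrying
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1",  0, "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1",  0, "halt"),    # overflow into a new leading 1
}

def run(tape):
    tape = list("_" + tape + "_")
    pos, state = 1, "scan"
    while state != "halt":
        write, move, state = rules[(state, tape[pos])]  # pure syntax: look up, act
        tape[pos] = write
        pos += move
    return "".join(tape).strip("_")

print(run("1011"))   # 1100  (11 + 1 = 12 in binary)
```

Note that even the reading head's ability to reliably distinguish "0" from "1" is semantics frozen into hardware by a designer - the machine itself only shuffles tokens.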

    So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax, as no contact with any actual semantics - no regulative interactions with material instability - is taking place.
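    And the same goes for learning itself. In a minimal perceptron, the "learning" is just one more syntactic rule - a fixed arithmetic recipe for nudging weights, designed in by humans. A toy sketch on a hypothetical task (learning logical AND):

```python
# A minimal perceptron: "learning" is itself more syntax - a fixed
# arithmetic rule for adjusting weights, baked in by the human designer.

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                  # the error rule is designed in
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical toy task: learn logical AND from examples
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in samples])   # [0, 0, 0, 1]
```

The network "discovers" the weights, but the update rule, the encoding of inputs, and the judgement that [0, 0, 0, 1] counts as success all come from outside the box.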

    Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

    So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

    And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.

    Consider the task of creating robot hand that is deleterious as the human hand.m-theory

    Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

    At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

    You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing its wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

    (Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).

    So again...this algorithm, if it does have semantic understanding...it does not and never will have human semantic understanding.m-theory

    But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

    Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.

    Pattee's epistemic cut was not very clear to me, and he seems to have coined this term.m-theory

    Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

    This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

    But really, Pattee won't make much sense unless you have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this is a good new pop sci book....

    http://lifesratchet.com/
  • General purpose A.I. is it here?
    I was seeking to make a distinction between simulating a human being and simulating general intelligence....I was using the criterion of if a computer could learn any problem and or solution to a problem that a human could...m-theory

    Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

    So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".

    So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?

    Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.

    But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

    Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

    If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.

    Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.
  • General purpose A.I. is it here?
    So every time I point to a fundamental difference, your reply is simply that differences can be minimised. And when I point out that minimising those differences might be physically impractical, you wave that constraint away as well. It doesn't seem as though you want to take a principled approach to your OP.

    Anyway, another way of phrasing the same challenge to your presumption there is no great problem here: can you imagine an algorithm that could operate usefully on unstable hardware? How could an algorithm function in the way you require if its next state of output were always irreducibly uncertain? In what sense would such a process still be algorithmic in your book if every time it computed some value, there would be no particular reason for the calculation to come out the same?
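    To make the unstable-hardware point concrete, here is a toy sketch (the function names and the bit-flip noise model are my own illustration, not anyone's actual proposal): the same addition run on idealised hardware always lands on one answer, while on simulated flaky hardware the notion of "the" result of the computation dissolves.

    ```python
    import random

    def add_exact(a, b):
        """Idealised hardware: the same inputs always give the same output."""
        return a + b

    def add_noisy(a, b, flip_prob=0.1):
        """Simulated unstable hardware: each bit of the result may flip."""
        result = a + b
        for bit in range(result.bit_length() + 1):
            if random.random() < flip_prob:
                result ^= 1 << bit
        return result

    random.seed(0)
    exact = {add_exact(20, 22) for _ in range(100)}
    noisy = {add_noisy(20, 22) for _ in range(100)}
    # The exact adder produces one value in 100 runs; the noisy one,
    # many - so in what sense is the noisy process still "an algorithm"?
    ```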
  • General purpose A.I. is it here?
    Beinghood is about having an informational idea of self in a way that allows one to materially perpetuate that self.

    So we say all life has autonomy in that semiotic fashion. Even immune systems and bacteria are biological information that can regulate a material state of being by being able to divide the material world into what is self and nonself.

    This basic division of course becomes a highly elaborated and "felt" one with complex brains, and in humans, with a further socially constructed self-conscious model of being a self. But for biology, there is this material state of caring that is at the root of life and mind from the evolutionary get go.

    And so when talking about AI, we have to apply that same principle. And for programmable machines, we can see that there is a designed-in divorce between the states of information and the material processes sustaining those states. Computers simply lack the means for an involved sense of self.

    Now we can imagine starting to create that connection by building computers that somehow are in control of their own material destinies. We could give our laptops the choice over their power consumption and their size - let them grow and spawn in some self-choosing way. We could let them pick what they actually wanted to be doing, and who with.

    We can imagine this in a sci fi way. But it would hardly be an easy design exercise. And the results would seem autistically clunky. And as I have pointed out, we would have to build in this selfhood relation from the top down. Whereas in life it exists from the bottom up, starting with molecular machines at the quasi-classical nanoscale of the biophysics of cells. So computers are always going against nature in trying to recreate nature in this sense.

    It is not an impossible engineering task to introduce some biological realism into an algorithmic architecture in the fashion of a neural network. But computers must always come at it from the wrong end, and so it is impossible in the sense of being utterly impractical to talk about a very realistic replication of biological structure.
  • General purpose A.I. is it here?
    Pattee would be worth reading. The difference is between information that can develop habits of material regulation - as in biology - and information that is by definition cut free of such an interaction with the physical world.

    Software can be implemented on any old hardware which operates with the inflexible dynamics of a Turing machine. Biology is information that relies on the opposite - the inherent dissipative dynamics of the actual physical world. Computers calculate. It is all syntax and no semantics. But biology regulates physicochemical processes. It is all about controlling the world rather than retreating from the world.

    You could think of it as a computer being a bunch of circuits that has a changing pattern of physical states that is as meaningless from the world's point of view as randomly flashing lights. Whereas a biological system is a collection of work organising devices - switches and gates and channels and other machinery designed to regulate a flow of material action.

    As we go from cellular machinery to neural machinery, the physicality of this becomes less obvious. Neurons with their axons and synapses start to look like computational devices. But the point is that they don't break that connection with a physicality of switches and motors. The biological difference remains intrinsic.
  • On materialistic reductionism
    Bahaha, douchebag.StreetlightX

    Your insults are so funny. Stylistically they are just all over the place. Maybe you should get a copy of a book by someone expert like Dorothy Parker so you could cut and paste?

    I still take the inferential constraints required by modelling to be particularizations of a more general aesthetic without having to place them into opposition, as you are wont to doStreetlightX

    Maybe you don't yet get how dichotomies relate to hierarchies - despite my explaining it to you repeatedly?

    The genetic level of semiosis and the verbal level of semiosis both play into the neurodevelopment of mental habits as levels of semiosis.

    And the point was your earlier posts argued for a disconnect when it came to human-level cognitive development. You said sensate bodies came before inference-mongering intellectuality, and a lot of other things in the same vein.

    You've now been forced to concede that this doesn't accurately describe the human situation at all - which would be a critical issue for any supposed philosophy founded on "aesthetics".

    What's interesting about Leroi-Gourhan's approach is that he does not simply and reductively oppose the aesthetic with the rational, but rather finds within the aesthetic a rationality of it's own, which is then progressively constrained for the sake of higher order abstraction;StreetlightX

    This is obvious. And also the important point when it comes to a semiotic metaphysics. All semiosis is about the "linear" constraint on free variety - a limitation of freedoms, a reduction in dimensionality.

    But again, your cite reveals that PoMo simply gets this back to front in treating the reduction in dimensionality as a bug rather than a feature. And this goes along with the anti-hierarchy/pro-flatness, anti-rationality/pro-romantic, anti-syntax/pro-semantics rhetoric that the dialectical-splitting habits of PoMo inspire.

    My systems view is of course based on the differentiation that is the basis of integration, the competition that is the basis of co-operation. So while I always talk about the division that is a dichotomy, I also always talk about its synergistic resolution which is a hierarchy.

    Thus when I repeatedly pull you up on your tendency to make "confrontational" divisions in ontology, this is not me applying my oppositional mentality on your holistic position, but instead me holistically highlighting the oppositional stance that is your go-to point of view. You show a quite incredible hostility to "otherness" - as you have demonstrated repeatedly to me and others in this thread.

    Again, it is really quite funny. So keep it up!
  • General purpose A.I. is it here?
    I assure you, from a computer science perspective, it is no equivocation to say that the deepmind general purpose ai is an algorithm.m-theory

    There is a computer science difference between programmable computers and learning machines.

    So yes, you can point to a learning rule embedded in a neural network node and say there - calculating a weight - is an algorithm.
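    Just to fix ideas, a minimal sketch of what such a node-level learning rule looks like - the classic perceptron update, written out as ordinary code (my own illustrative example, not DeepMind's architecture):

    ```python
    def perceptron_step(weights, inputs, target, lr=1):
        """One application of the perceptron learning rule: nudge the
        weights in proportion to the output error."""
        activation = sum(w * x for w, x in zip(weights, inputs))
        output = 1 if activation > 0 else 0
        error = target - output
        return [w + lr * error * x for w, x in zip(weights, inputs)]

    # Teach a single node logical AND; the last input is held at a
    # constant 1 so its weight acts as a bias term.
    data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
    weights = [0, 0, 0]
    for _ in range(20):
        for inputs, target in data:
            weights = perceptron_step(weights, inputs, target)
    ```

    So yes, at this grain the weight calculation is plainly algorithmic. The question is whether that exhausts what the whole embodied system is doing.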

    But then a neural network is (ideally) in dynamical feedback interaction with the world. It is embodied in the way of a brain. And this is a non-algorithmic aspect of its being. You can't write out the program that is the system's forward model of the world. The forward model emerges rather than being represented by a priori routines.

    So sure, you can ask about the algorithm of the mind. But this is equivocal if you then seem to think you are talking about some kind of programmable computer and not giving due weight to the non-algorithmic aspects of a neural net which are the actual basis of its biological realism.

    The idea of an algorithm in itself completely fails to have biological realism. Sure, we can mathematically simulate the dynamical bistability of a molecular machine. We can model what is going on in brains and cells in terms of a sequence of rules. But algorithms can't push matter about or regulate dissipative processes.

    That is the whole point of a Turing machine - to disconnect the software actions from the hardware mechanics. And the whole point of biology is the opposite - to have a dynamical interaction between the symbols and the matter. At every level of the biological organisation, matter needs to get pushed about for anything to be happening.

    So in philosophy of mind terms, Turing computation is quite unlike biological semiosis in a completely fundamental way.

    See - http://www.academia.edu/3075569/Artificial_Life_Needs_a_Real_Epistemology

    And a neural net architecture tries to bridge the gap. But to the degree it is algorithmic, it is merely a Turing-based simulation of neurosemiosis~neurodynamics.

    Just a bit of simulated biological realism is of course very powerful. Neural nets make a big break with programmable devices even if the biology is simulated at the most trivial two-layer perceptron level. And if you are asking the big question of whether neural networks could be conscious - have qualitative states - I think that is a tough thing to even pin down as an intelligible query.

    I can imagine a simulation of neurodynamics that is so speedy that it can keep up with the world at the rate that humans keep up with the world. But would this simulation have feelings if it wasn't also simulating the interior milieu of a regulated biological body? And how grainy would the simulation be, given internal metabolic processes have nano-range timescales?

    The natural human brain builds up from nanoscale molecular dynamics and so never suffers a graininess issue. There is an intimate connection with the material world built into the semiotic activity from the get-go.

    But computation comes from the other direction. It starts algorithmically with no material semiosis - a designed-in disconnect between the symbolic software and the physical hardware. And to attain biological realism via simulation, it has to start building in that feedback dynamical connection with the world - the Bayesian forward modeling loop - from the top down. And clearly, the extra computational cost of that increases exponentially as the design tries to build a connection in on ever finer scales of interaction.
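    The generic shape of that forward-modelling loop can be caricatured in a few lines - a hedged sketch of the predict/compare/correct cycle (names and the scalar set-up are my own illustration, not any particular architecture):

    ```python
    def track(observations, gain=0.3):
        """Minimal predict-and-correct loop: the 'forward model' here is
        just a running estimate, corrected by each prediction error."""
        estimate = 0.0
        for obs in observations:
            prediction = estimate                 # predict what the world will show
            error = obs - prediction              # the surprise: prediction error
            estimate = prediction + gain * error  # correct the model
        return estimate
    ```

    Even this one-variable caricature shows the cost issue: the loop must run once per observation, so pushing the modelled interaction down to ever finer spatial and temporal grain multiplies the computational burden accordingly.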

    So I don't just say that neural nets can't be conscious. But I do say we can see why it might be impossibly expensive to do that via algorithmic simulation of material semiosis.
  • General purpose A.I. is it here?
    Likewise. I believe you were about the first person I "met" on PF, talking about thermal models of time!
  • General purpose A.I. is it here?
    Perhaps I am missing something?m-theory

    Yep. As your cite says: "Neural turing machines combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers."

    So this is talking about a hybrid between a neural net and a Turing machine with "algorithmic power".

    The distinction is important. The mind could be a neural net (neural nets might have the biological realism to do what brains do). But the mind couldn't be a Turing Machine - as biology is different in architectural principle from computation.
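    The "fuzzy pattern matching" half of that hybrid is concrete enough to sketch. In the Neural Turing Machine design, memory is read not by a hard address but by a soft, similarity-weighted blend over every memory row - here is a minimal, hedged sketch of that content-based addressing idea (the sharpness parameter and toy memory are my own illustration):

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def content_read(memory, key, sharpness=5.0):
        """Soft content-based addressing: weight every memory row by its
        similarity to the key, then return the blended read vector."""
        scores = [math.exp(sharpness * cosine(row, key)) for row in memory]
        total = sum(scores)
        attention = [s / total for s in scores]
        return [sum(w * row[i] for w, row in zip(attention, memory))
                for i in range(len(memory[0]))]

    memory = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    read = content_read(memory, key=[1.0, 0.0])
    # The read vector leans heavily toward the first row, the best match.
    ```

    Note that nothing in this mechanism is discretely addressed the way a Turing machine tape is - which is exactly why the paper had to bolt the two styles together.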
  • General purpose A.I. is it here?
    Is the mind an algorithmm-theory

    Is a neural net strictly speaking just an algorithm? Or does it do what it can do because an anticipation-creating learning rule acts as a constraint on material dynamics?

    Potentially there is a lot of equivocation in what is understood by "algorithm" here. The difference between neural nets and Turing Machines is a pretty deep one philosophically.