• TheMadFool
    13.8k
    What's the problem with referents? The clear liquid that flows in rivers and the oceans that at times becomes solid and cold, and at other times is invisible vapor is the referent of the word "water".
  • A Raybould
    86
    Computers are symbol manipulators and that means whatever can be symbolized, is within the reach of a computer.TheMadFool

    "Within the reach" avoids precision where precision is needed. What do you mean, here?


    If you believe there's more to understanding than symbol manipulation...TheMadFool

    Whether it is symbol manipulation is beside the point. What's at issue here is my statement that "[Turing-like] tests could be further improved by focusing on what a machine understands, rather than what it knows" and your reply that you don't see anything special in understanding. Being symbol manipulation does not automatically make it simple, and your explanations, which invoke symbol manipulation without showing what sort of manipulation, are just part of the reason for thinking that it is not.


    If you believe there's more to understanding than symbol manipulation...TheMadFool

    That is a view I have not stated and do not hold. My position has consistently been that understanding is not a simple issue and that it remains a significant obstacle for AI to overcome. I have also taken the position that current logic solvers are not sufficient to give a machine the ability to understand the world, which should not be mistaken for a claim that no form of symbol manipulation could work. To be clear, my position on consciousness is that I suppose that a digital computer could simulate a brain, and if it did, it would have a mind like that of the brain being simulated.


    Understanding is simply a match-the-following exercise, something a computer can easily accomplish.TheMadFool

    Please expand on 'match-the-following', as I cannot imagine any interpretation of that phrase that would lead to a computer being able to understand anything to the point where it would perform reasonably well on "common-sense physics" problems (in fact, perhaps you could work through the "if you put cheese in a refrigerator, will it melt?" example?)


    How do you think machine translations work?TheMadFool

    You have taken hold of the wrong end of the stick here. I was replying to your question "Is there a word with a referent that's impossible to be translated into computer-speak?" by pointing out that it is irrelevant to the issue because translation does not create or add meaning. In turn, it is also irrelevant to this point whether the translation is done by humans or machines: neither of them create or add meaning, which is delivered in the original text.


    Ask yourself the question: what exactly does understanding semantics mean if a machine, allegedly incapable of semantics, can do as good a job as a translator of languages?TheMadFool

    You are barking up the wrong tree here, precisely because translation does not modify the semantics of its input. To address the matter at hand, you would need an example that does demand understanding. We have that, at least to a small extent, in the common-sense physics questions discussed in the paper I linked to, and even here the performance of current AI is weak (note that this is not my assessment or that of critics; it is from the team which developed the program.) You have avoided addressing this empirical evidence against your claim that machine understanding is simple, until...

    I've heard that even the very best expert can and do make mistakes.TheMadFool

    Really? Do you understand that, for this excuse to work, it would not take just one or two experts to make a few mistakes; it would require the entire community to be mistaken all the time, never seeing the simple solution that you claim to have but have not yet explained?


    Kindly furnish the definitions of "conceivable" and "possible". I'd like to see how they differ, if you don't mind.TheMadFool

    At last! Back to the main issue. I will start by quoting what I wrote previously:

    In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)

    If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step.
    A Raybould

    To expand on that, one could hold that any sentence in propositional form is conceivable, as it is conceived of merely by being expressed (some people might exclude propositions that are false a priori, but a difficulty with that is that we don't always (or even often) know whether that is the case.)

    In the context of modal arguments, of which the p-zombie argument is one, for the sentence to be possible, it must be true in a possible world. In modal logic, if you want a claim that something is possible to be accepted by other people, you either have to get them to accept it as an axiom, or you must derive it from axioms they have accepted.

    I am not sure if the above is going to help, because the debate over whether Chalmers can go from conceivability to possibility is, in part, a debate over what, exactly, people have accepted when they accept the conceivability of p-zombies. What seems clear, however, is that neither side is prepared to say that they are the same.

    By the way, your positions seem to be generally physicalist, except that you are troubled by p-zombies, which are intended to be anti-physicalist. AFAIK, this is quite an unusual combination of views.
  • bongo fury
    1.6k
    What's the problem with referents?TheMadFool

    Whether they are things out in the world, or merely more words referring to those things.

    The clear liquid that flows in rivers and the oceans that at times becomes solid and cold, and at other times is invisible vapor is the referent of the word "water".TheMadFool

    Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description?
  • TheMadFool
    13.8k
    Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description?bongo fury

    The description consists of referents.
  • TheMadFool
    13.8k
    "Within the reach" avoids precision where precision is needed. What do you mean, here?A Raybould

    While the initial association of symbols may require human input, once the work is complete, a computer can use the databank just as humans do.

    What's at issue here is my statement that "[Turing-like] tests could be further improved by focusing on what a machine understands, rather than what it knows" and your reply that you don't see anything special in understanding. Being symbol manipulation does not automatically make it simple, and your explanations, which invoke symbol manipulation whithout showing what sort of manipulation, are just part of the reason for thinking that it is not.A Raybould

    You insist that human understanding is not something a computer can do but what's the argument that backs this claim? I'd like to see an argument if it's all the same to you.

    You are barking up the wrong tree here, precisely because translation does not modify the semantics of its input.A Raybould

    What I'm saying is very simple. Semantics is nothing more than associating words with something else - be it as concrete as a stone or as abstract as calculus. Associating two things is easily done by a computer and ergo, in my humble opinion, semantics can be handled by a computer.
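
    A minimal sketch of that "association" picture (the table entries here are invented placeholders for illustration, not a real semantic model):

```python
# A toy word-referent table: each word maps to a bundle of associated data.
# The entries are illustrative placeholders, not a real semantic model.
referents = {
    "water": {"kind": "substance", "states": ["liquid", "ice", "vapor"]},
    "stone": {"kind": "object", "properties": ["hard", "heavy"]},
    "calculus": {"kind": "abstraction", "about": "change and accumulation"},
}

def associate(word):
    """Return whatever data the table links to a word, or None if nothing is linked."""
    return referents.get(word)

print(associate("water")["states"])
```

    Whether such a lookup amounts to understanding is, of course, exactly what is in dispute in this thread.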

    To back up my position, I'll ask a question:

    When you read the word "water" and understand it, what's going on in your head?

    By the way, you haven't furnished the definitions for "conceivable" and "possible".
  • bongo fury
    1.6k
    Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description?
    — bongo fury

    The description consists of referents.
    TheMadFool

    Ok, well to see "why people make such a big deal of understanding" you need to see that they are interested in how we link the word "water" to the water itself, and not merely to more words for water.

    "Referent" usually refers to the designated object itself, not to other words, semantically related or not.
  • TheMadFool
    13.8k
    Ok, well to see "why people make such a big deal of understanding" you need to see that they are interested in how we link the word "water" to the water itself, and not merely to more words for water.

    "Referent" usually refers to the designated object itself, not to other words, semantically related or not.
    bongo fury

    How do we do it, link the word "water" to the water itself, in your opinion?
  • bongo fury
    1.6k
    How do we do it, link the word "water" to the water itself, in your opinion?TheMadFool

    By learning to agree (or disagree) with other people that particular tokens of the word are pointed at particular instances of the object.
  • TheMadFool
    13.8k
    By learning to agree (or disagree) with other people that particular tokens of the word are pointed at particular instances of the object.bongo fury

    That's to say there is no meaning except in the sense of a consensus. What makes you think computers can't do that? Can't one computer use the same word-referent associations as another?
  • bongo fury
    1.6k
    That's to say there is no meaning except in the sense of a consensus.TheMadFool

    If you like. Is that an objection?

    What makes you think computers can't do that?TheMadFool

    What, agree and disagree about where each other's words have 'landed', out in the world? If by computers you mean some future AI, then sure. This would no doubt be a few steps more advanced than, say, being able to predict where each other's ball has (actually) landed. Which I assume is challenging enough for current robots.
  • A Raybould
    86

    I am replacing my original reply because I do not think the nitpicking style that this conversation has fallen into is helpful.

    From your explanations and 'water' question in your latest reply, it seems increasingly clear to me that we have very different ideas of what understanding is. For you, it seems to be something such that, if a person memorized a dictionary, they would understand everything that is defined in it. For me, it is partly an ability to find the significant, implicit connections between the things you know, and there is also a counterfactual aspect to it: seeing the consequences if things were different, and seeing what needs to change in order to get a desired result.

    Given these different conceptions, it is not surprising that you might think it is an easy problem, while I see significant difficulties. I will not repeat those difficulties here, as I have already covered them in previous posts.

    As for what I believe, no extant computer(+program) can perform human-like understanding, but I expect some future computer could do so.

    With regard to conceivability versus possibility, I gave my working definitions in my previous post, though not spelled out in 'dictionary style.' For completeness, here are the stripped-down versions:

    Conceivable: Anything that can be stated as a proposition is conceivable.

    Possible: In the context of modal logic, which is the context of Chalmers' argument, something is possible if and only if it can be stated as a proposition that is true in some possible world.
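
    In rough modal notation (my own gloss, not Chalmers' own formalism; read ◇P as "possibly P" and w ⊩ P as "P holds at world w"), the two definitions come apart like this:

```latex
% Conceivable: an epistemic notion - P merely needs to be statable:
\text{Conceivable}(P) \;\approx\; P \text{ can be stated as a proposition}
% Possible: a metaphysical notion - P is true in some possible world:
\Diamond P \;\equiv\; \exists w \, (w \Vdash P)
% The contested step in the p-zombie argument is the inference:
\text{Conceivable}(P) \;\Rightarrow\; \Diamond P
```

    The second line is also why possibility claims carry a proof burden: to get others to accept ◇P, you must either exhibit it as an axiom they grant or derive it from axioms they have granted.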

    I do not think reducing them to bare definitions is very helpful, and by doing so, perhaps I can persuade you of that. I urge you to take another look at the Collatz conjecture example from before.
  • TheMadFool
    13.8k
    Given these different conceptions, it is not surprising that you might think it is an easy problem, while I see significant difficultiesA Raybould

    I agree. For your kind information, I suspect my idea of understanding is simpler than yours; that probably explains why I feel computers are capable of it.

    What exactly does understanding mean for you?

    Let me illustrate what understanding means (for me):

    1. "Trees need water" consists of three words viz. "trees", "need", "water". It's clear that all three of them have referents. "Trees" and "water" being concrete objects, it may be easy for a computer to make the connection between these words and their referents. The referent for "water" will consist of the sensory data associated with water, and to that we can add scientific knowledge that water is H2O, has hydrogen bonds, etc.

    The same can be done with "trees". To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words.

    Coming to the word "need" we immediately recognize that the word refers to a concept i.e. the word has an abstract referent. The concept of need is a pattern abstracted from instances such as the relationship between plants and water, animals and oxygen, fire and heat, cars and gasoline, etc. In other words, if computers can be programmed to seek patterns the way humans can, then even abstract concepts, ergo abstract referents, are both within the reach and grasp of computers so programmed.

    In summary, understanding is about 1) associating sets of [sensory & mental] data (sensory as in through the sense organs and mental as in knowledge within a given paradigm) and 2) pattern detection.
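
    A toy sketch of the second aspect, pattern detection, using the instance pairs from the post (the "detection" here is deliberately trivial, which is itself part of what is being debated):

```python
# Toy 'pattern detection': the same (dependent, requirement) shape recurs
# across instances, and the shared relation is labeled with the abstract
# concept "need". The pairs are the examples given in the post.
instances = [
    ("plants", "water"),
    ("animals", "oxygen"),
    ("fire", "heat"),
    ("cars", "gasoline"),
]

def abstract_concept(name, pairs):
    # 'Detection' is trivial by construction: every pair already shares
    # the same two-place shape. Whether a machine could find that shape
    # unaided is the open question.
    return {"concept": name, "instances": list(pairs)}

need = abstract_concept("need", instances)
print(len(need["instances"]))  # 4
```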

    Which of the two aspects of understanding delineated above are difficult or impossible for a computer in your opinion?

    What's your definition of understanding?

    For me, it is partly an ability to find the significant, implicit connections between the things you know, and there is also a counterfactual aspect to it: seeing the consequences if things were different, and seeing what needs to change in order to get a desired result.A Raybould

    I covered this above.

    Conceivable: Anything that can be stated as a proposition is conceivable.

    Possible: In the context of modal logic, which is the context of Chalmers' argument, something is possible if and only if it can be stated as a proposition that is true in some possible world.
    A Raybould


    Thanks but I have an issue with your definition of "conceivable". According to you, unlike possibility, conceivability has no logical significance at all. When I say, "p-zombies are conceivable" I'm making the claim about p-zombies. That I can say that sentence ("can be stated") is the least of my concerns. Your definition of "conceivable" falls short of being relevant to the issue of whether p-zombies are possible/conceivable. Thanks anyway.
  • TheMadFool
    13.8k
    What, agree and disagree about where each other's words have 'landed', out in the world? If by computers you mean some future AI, then sure. This would no doubt be a few steps more advanced than, say, being able to predict where each other's ball has (actually) landed. Which I assume is challenging enough for current robots.bongo fury

    Do you mean that human understanding is reducible to computer logic but that we haven't the technology to make it work? If yes then that means you agree with me in principle that human understanding isn't something special, something that can't be handled by logic gates inside computers.
  • bongo fury
    1.6k
    Do you mean that human understanding is reducible to computer logicTheMadFool

    Only in the almost trivial sense that neurons are quite evidently some kind of switch or trigger.

    but that we haven't the technology to make it work? If yes then that means you agree with me in principle that human understanding isn't something special, something that can't be handled by logic gates inside computers.TheMadFool

    I roughly agree with you now (maybe, or maybe the switches will have to be actual neurons; we don't yet know), since you're talking about way off in the future.

    But do you at last see the trouble here,

    Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish.TheMadFool

    ?
  • Harry Hindu
    4.9k
    Searle says that syntax can not give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.A Raybould

    The problem with Searle's Chinese room is that the man in the room does understand something - the rules he is following - write this symbol when you see this symbol. It's just not the same rules that Chinese speaking people follow when using those same symbols.

    The meaning of symbols can be arbitrary. Just look at all the different words from different languages that refer to the same thing. When we aren't using the same rules for the same symbols it can appear as if one of us isn't understanding the symbols.

    That's what understanding is - having a set of rules for interpreting symbols. In this sense, computers understand thanks to their programming (a set of rules).
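
    That rule-following can be sketched as a bare rewrite table; the symbols and rules below are made up for illustration, and the point is that the lookup never touches what the symbols mean:

```python
# The man in the room, as a bare rewrite table: apply rules with no
# access to what the symbols mean. The rules are invented placeholders.
rules = {
    "你好": "你好吗",
    "谢谢": "不客气",
}

def room(symbol):
    # What is 'understood' is the rule ("write this when you see this"),
    # not the symbol; an unrecognized symbol just falls through.
    return rules.get(symbol, "???")

print(room("你好"))
```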
  • TheMadFool
    13.8k
    Only in the almost trivial sense that neurons are quite evidently some kind of switch or trigger.bongo fury

    What would a non-trivial sense look like? I mean, your beliefs about human understanding seem to make it, oddly and also intriguingly, something approaching the incomprehensible, even to humans themselves. Unless, of course, you have something to say about it... what's your take on understanding?

    But do you at last see the trouble herebongo fury

    To make it short, no.
  • fishfry
    2.6k
    If you want to read Chalmers' own words, he has written a book and a series of papers on the issue. As you did not bother to read my original link, I will not take the time to look up these references; you can find them yourself easily enough if you want to (and they may well be found in that linked article). I will warn you that you will find the papers easier to follow if you start by first reading the reference I gave you.A Raybould

    You're right, will do as the inclination strikes.


    That is a different question than the one you asked, and I replied to, earlier. The answer to this one is that a TM is always distinguishable from a human, because neither a human, nor just its brain, nor any other part of it, is a TM. A human mind can implement a TM, to a degree, by simulation (thinking through the steps and remembering the state), but this is beside the point here.A Raybould

    Oh my. That's not true. But first for the record let me say that I agree with you. A TM could perhaps convincingly emulate but never implement an actual human mind. I don't believe the mind is a TM.

    But many smart people disagree. You have all the deep thinkers who believe the entire universe is a "simulation," by which they mean a program running on some kind of big computer in the sky. (Why do these hip speculations always sound so much like ancient religion?) We have many people these days talking about how AI will achieve consciousness and that specifically, the human mind IS a TM. I happen to believe they're all wrong, but many hold that opinion these days. Truth is nobody knows for sure.

    I've read many arguments saying that minds (and even the entire universe) are TMs. Computations. I don't agree, but I can't pretend all these learned opinions aren't out there. Bostrom and all these other likeminded characters. By the way I think Bostrom was originally trolling and is probably surprised that so many people are taking his idea seriously.

    If you had actually intended to ask "...indistinguishable from a human when interrogated over a teletype" (or by texting), that would be missing the point that p-zombies are supposed to be physically indistinguishable from humans (see the first paragraph in their Wikipedia entry), even when examined in the most thorough and intrusive way possible. This is a key element in Chalmers' argument against metaphysical physicalism.A Raybould

    I'm perfectly willing to stipulate that a p-zombie is physically indistinguishable. I made the assumption, which might be wrong, that their impetus or mechanism of action is driven by a computation. That is, they're totally human-like androids run by a computer program.

    If you are saying the idea is that they're totally lifelike and they have behavior but the behavior is not programmed ... then I must say I don't understand how such a thing could exist, even in a thought experiment. Maybe I should go read Chalmers.

    As a p-zombie is physically identical to a human (or a human brain, if we agree that no other organ is relevant), then it is made of cells that work in a very non-Turing, non-digital way.A Raybould

    You and I would both like to believe that. But neither of us has evidence that the mind is not a TM, nor do we have hours in the day to fight off the legion of contemporary intellectuals arguing that it is.

    Roger Penrose is famous for arguing that the mind is not a computation, and that, via Gödel's incompleteness theorem, it depends on noncomputable processes in the microtubules. Nobody takes the idea seriously except as a point of interest. Sir Roger's bad ideas are better than most people's good ones.

    But you can't be unaware that many smart people argue that the mind is a TM.

    We don't know of anything that works in a "non-Turing, non-digital way." There are mathematical models of hypercomputation or supercomputation in which one assumes capabilities beyond TMs. But there's no physics to go along with it. Nobody's ever seen hypercomputation in the physical world and the burden would be on you (and me) to demonstrate such.


    Chalmers believes he can show that there is a possible world identical to ours other than it being inhabited by p-zombies rather than humans, and therefore that the metaphysical doctrine of physicalism - that everything must necessarily be a manifestation of something physical - is false.A Raybould

    I recall this argument. It's wrong. If our minds are a logical or necessary consequence of our physical configuration, and a p-zombie is identical to a human, then the p-zombie must be self-aware.

    Otherwise there is some "secret sauce" that implements consciousness; something that goes beyond the physical. So much for the argument for physicalism; you just refuted it. Maybe I'm misunderstanding the argument. But if the mind is physical and a p-zombie is physically identical, then a p-zombie has a mind. If a p-zombie is physically identical yet has no mind, then mind is NOT physical. Isn't that right?
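
    In modal notation (one common way of putting the point, with P standing for the totality of physical facts and M for the facts about minds), the dichotomy is a single contrapositive:

```latex
% Physicalism as supervenience: fixing the physical facts fixes the mental facts.
\text{Physicalism}: \;\Box\,(P \rightarrow M)
% A zombie world is one where the physical facts hold without mind:
\text{Zombies possible}: \;\Diamond\,(P \wedge \neg M)
% And these are contradictories:
\Diamond\,(P \wedge \neg M) \;\equiv\; \neg\,\Box\,(P \rightarrow M)
```

    So granting that a mindless physical duplicate is genuinely possible is already granting that physicalism is false.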

    Notice that there is no mention of AI or Turing machines here.A Raybould

    Only, in my opinion, because not every philosopher understands the theory of computation.

    What animates the p-zombie? If it's a mindless machine, it must have either a program, or it must have some noncomputable secret sauce. If the latter, the discovery of such a mechanism would be the greatest scientific revolution of all time. If Chalmers is ignoring this, I can't speak for him nor am I qualified to comment on his work.


    P-zombies only enter the AI debate through additional speculation: If p-zombies are possible, then it is also possible that any machine (Turing or otherwise), no matter how much it might seem to be emulating a human, is at most emulating a p-zombie.A Raybould

    As I mentioned earlier, it's entirely possible that my next door neighbor is only emulating a human. We can never have first-hand knowledge of anyone else's subjective mental states.

    I still want to understand what is claimed to animate a p-zombie. Is it a computation? Or something extra-computational? And if it's the latter, physics knows of no such mechanism.


    As the concept of p-zombies is carefully constructed so as to be beyond scientific examination, such a claim may be impossible to disprove, but it is as vulnerable to Occam's razor as is any hypothesis invoking magic or the supernatural.A Raybould

    Ah. Perhaps that explains my unease with the concept. My understanding is that p-zombies are logically incoherent. They are identical enough to humans to emulate all human behavior, but they don't implement a subjective mind. In which case, mind must be extra-computable. Penrose's idea. I tend to agree that the mind is not computable. But how do p-zombies relate?

    I think you've motivated me to at least go see what Chalmers has to say on the subject. Maybe I'll learn something. I can see that there must be more to the p-zombie argument than I'm aware of.

    ps -- I just started reading and came to this: "It seems that if zombies really are possible, then physicalism is false and some kind of dualism is true."

    https://plato.stanford.edu/entries/zombies/

    This tells me that my thinking is on the right track. If we are physical and p-zombies are physically identical then p-zombies are self-aware. If they aren't, then humans must have some quality or secret sauce that is non-physical.

    I would raise an intermediate possibility. The mechanism of mind might be physical but not computable. So we have three levels, not two as posited by the p-zombie theorists:

    * Mind is entirely physical and computable.

    * Mind is entirely physical but not necessarily computable, in the sense of Turing 1936. It might be some sort of hypercomputation as is studied by theorists.

    * Mind might be non-physical. In which case we are in the realm of spirits and very little can be said.


    pps --

    AHA!!

    " Proponents of the argument, such as philosopher David Chalmers, argue that since a zombie is defined as physiologically indistinguishable from human beings, even its logical possibility would be a sound refutation of physicalism, because it would establish the existence of conscious experience as a further fact."

    https://en.wikipedia.org/wiki/Philosophical_zombie

    This is exactly what I'm understanding. And I agree that I probably made things too complicated by inserting computability in there. The p-zombie argument is actually much simpler.

    Well that counts as research for me these days. A couple of paragraphs of SEP and a quote-mine from Wiki. Such is the state of contemporary epistemology. If it's on Twitter it's true.
  • A Raybould
    86

    There is no point in discussing your own private definition of 'understanding' - no-one can seriously doubt that computers are capable of performing dictionary look-ups or navigating a predefined network of lexical relationships; even current AI can do much more than just that.

    We can make no judgment, however, of whether AI is performing at human-like levels by only looking at simple examples, and the fact remains that AI currently has problems with certain more demanding cognitive tasks, such as with "common-sense physics" (as I mentioned previously, that is not just my opinion, it is a quote from those who are actually doing the work.) You have given no plausible explanation for how your concept of understanding, and of how it can easily be achieved, solves this problem, and in your only attempt to explain away why, if it is so easy, it remains an acknowledged problem in actual AI research, you implied that the whole AI community has consistently failed to see what is obvious to you.

    There is no reasonable doubt that AI currently has a problem with something here; I just don't know what it is called in your personal lexicon.

    Personal lexicons come up again in the issue of 'conceivable' vs. 'possible', where the definition I attempted of 'conceivable' apparently doesn't match yours. There is no point in getting into a "you say, I say" argument, but we don't have to: it is a straightforward fact that the distinction between 'conceivable' and 'possible' is widely accepted among philosophers and is central to Chalmers' p-zombie argument. I will grant that it is conceivable, and even possible, that you are right and they are all wrong, but I don't think it is probable.

    You would be more convincing if you could explain where the example I gave earlier, using the current status of the Collatz conjecture, goes wrong.
  • A Raybould
    86

    Even in Bostrom's simulation argument, neither brains nor minds are TMs: in that argument, I (or, rather, what I perceive as myself) is a process (a computation being performed), and what I perceive as being the rest of you is just data in that process. To confuse a process (in either the computational sense here, or more generally) with the medium performing the process is like saying "a flight to Miami is an airplane." A computation is distinct from the entity doing the computation (even if the latter is a simulation - i.e. is itself a computation - they are different computations (and even when a computation is a simulation of itself, they proceed at different rates in unending recursion.))

    I recognize that this locution is fairly common - for example, we find Searle writing "The question is, 'Is the brain a digital computer?' And for the purposes of this discussion I am taking that question as equivalent to 'Are brain processes computational?'" - but, as this quote clearly shows, this is just a manner of speaking, and IMHO it is best avoided, as it tends to lead to confusion (as demonstrated in this thread) and can prime the mind to overlook certain issues in the underlying question (for example, if you assume that the brain is a TM, it is unlikely that you will see what Chalmers is trying to say about p-zombies.) To me, Searle's first version of his question is little more than what we now call click-bait.
  • TheMadFool
    13.8k
    There is no point in discussing your own private definition of 'understandingA Raybould

    What's the public definition of understanding? Since no definition was agreed upon, I thought I might just throw in my own understanding of understanding. What's your definition, if I may ask?

    you implied that the whole AI community has consistently failed to see what is obvious to you.A Raybould

    I don't know what's so uncomputable about understanding. As far as I can tell, all that's required are:

    1. Word-referent connection

    2. Pattern recognition

    Is there anything else to understanding? If there is I'd like to know. Thanks.

    'conceivable' vs. 'possible'A Raybould

    What is the difference between the two? If they're the same, then something is conceivable if and only if it is possible.

    If they're different, then there should be things that are conceivable but not possible, and things that are possible but not conceivable. Please provide examples of both scenarios for my benefit. Thanks.
  • fishfry
    2.6k
    Even in Bostrom's simulation argument, neither brains nor minds are TMs: in that argument,A Raybould

    If the word simulation means something other than computation, you need to state clearly what that is; and it has to be consistent either with known physics; or else stated as speculative physics.

    I'll agree that Bostrom and other philosophers (Searle included) don't appear to know enough computability theory to realize that when they say simulation they mean computation; and that when they say computation they must mean a TM or a practical implementation of a TM. If not, then what?

    When we simulate gravity or the weather or a first person shoot-'em-up video game or Wolfram's cellular automata or any other simulation, it's always a computer simulation. What other kind is there?

    And when we say computation, the word has a specific scientific meaning laid out by Turing in 1936 and still the reigning and undefeated champion.

    Now for the record there are theories of:

    * Supercomputation; in which infinitely many instructions or operations can be carried out in finite time; and

    * Hypercomputation; in which we start with a TM and adjoin one or more oracles to solve previously uncomputable problems.

    Both supercomputation and hypercomputation are studied by theorists; but neither are consistent with known physical theory. The burden is on you to be clear on what you mean by simulation and computation if you don't mean a TM.

    I (or, rather, what I perceive as myself) is a process (a computation being performed), and what I perceive as being the rest of you is just data in that process.A Raybould

    But what do you mean by computation? Turing defined what a computation is. If you mean to use Turing's definition, then you have no disagreement with me. And if you mean something else, then you need to clearly state what that something else is, since the definition of computation has not changed since Turing gave it.


    To confuse a process (in either the computational sense here, or more generally) with the medium performing the process is like saying "a flight to Miami is an airplane."A Raybould

    I have done no such thing. I don't know why you'd think I did. A computation is not defined by the medium in which it's implemented; and in fact a computation is independent of its mode of execution. I genuinely question why you think I said otherwise.

    If you agree that you and I are "processes," a term you haven't defined but which has a well-known meaning in computer science with which I'm highly familiar, then a process is a computation. You can execute Euclid's algorithm with pencil and paper or on a supercomputer, it makes no difference. It's the same computation.
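    The point about Euclid's algorithm can be illustrated concretely. A minimal sketch (the function name `euclid_gcd` is my own, chosen for illustration): the same sequence of steps yields the same result whether carried out by hand or by a machine, which is what it means for a computation to be independent of its medium.

    ```python
    def euclid_gcd(a, b):
        """Euclid's algorithm: repeatedly replace the pair (a, b)
        with (b, a mod b) until the second operand reaches zero."""
        while b != 0:
            a, b = b, a % b
        return a

    # The identical steps can be done with pencil and paper:
    # (48, 18) -> (18, 12) -> (12, 6) -> (6, 0) -> gcd is 6
    print(euclid_gcd(48, 18))  # 6
    ```

    Whatever performs these steps - paper, supercomputer, or abstract TM - it is the same computation.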

    A computation is distinct from the entity doing the computation (even if the latter is a simulation - i.e. is itself a computation - they are different computations (and even when a computation is a simulation of itself, they proceed at different rates in unending recursion.))A Raybould

    You're arguing with yourself here. I have never said anything to the contrary. A computation is independent of the means of its execution. What does that have to do with anything we're talking about?

    I recognize that this loqution is fairly common - for example, we find Searle writing "The question is, 'Is the brain a digital computer?'A Raybould

    Searle also, in his famous Chinese room argument, doesn't talk about computations in the technical sense; but his argument can be perfectly well adapted. Searle's symbol lookups can be done by a TM.

    And again, so what? You claim the word simulation doesn't mean computation; and that computation isn't a TM. That's two claims at odds with reality and known physics and computer science. The burden is on you to provide clarity. You're going on about a topic I never mentioned and a claim I never made.

    And for the purposes of this discussion I am taking that question as equivalent to 'Are brain processes computational?" - but, as this quote clearly shows, this is just a manner of speaking,A Raybould

    But a computation is a very specific technical thing. If I start going on about quarks and I say something that shows that I'm ignorant of physics and I excuse myself by saying, "Oh that was just a manner of speaking," you would label me a bullshitter.

    If you mean to use the word computation, you have to either accept its standard technical definition; or clearly say you mean something else, and then say exactly what that something else is.


    and IMHO it is best avoided, as it tends to lead to confusion (as demonstrated in this thread)A Raybould

    I'm not confused. My thinking and knowledge are perfectly clear. A computation is defined as in computer science. And if you mean that we are a "simulation" in some sense OTHER than a computation, you have to say what you mean by that, and you have to make sure that your new definition is compatible with known physics.


    and can prime the mind to overlook certain issues in the underlying question (for example, if you assume that the brain is a TM, it is unlikely that you will see what Chalmers is trying to say about p-zombies.)A Raybould

    I understand exactly what Chalmers is saying about p-zombies now that I re-acquainted myself with the topic as a result of this thread.

    But you're going off in all directions.

    What do you mean by simulation, if not a computer simulation? And what do you mean by a computation, if not a TM?

    To me, Searle's first version of his question is little more than what we now call click-bait.A Raybould

    Whatever. I'm not Searle and he got himself into some #MeToo trouble and is no longer teaching. Why don't you try talking to me instead of throwing rocks at Searle?
  • A Raybould
    86

    What part of 'a computation is what a Turing machine does, not what it is' do you not understand? At least until we sort that out, I am not going to read any more of this jumble.
  • fishfry
    2.6k
    What part of 'a computation is what a Turing machine does, not what it is' do you not understand? At least until we sort that out, I am not going to read any more of this jumble.A Raybould

    Best you don't, since I couldn't have been more clear.

    A TM is not a physical device. It's an abstract mathematical construction. A computation, by definition, is anything that a TM can do. This isn't me saying this, it's Turing followed by 80 years worth of computer science saying that.

    If you think there's something that counts as a computation, that

    a) Can not be implemented by a TM; and

    b) Is consistent with known physics;

    then by all means tell me what that is to you.
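    For readers unfamiliar with the formalism being invoked here, a Turing machine is just a state-transition table acting on a tape. A minimal sketch (the `run_tm` helper and the bit-flipping rule table are hypothetical examples of my own, not anything from Turing's paper):

    ```python
    def run_tm(tape, rules, state="start", pos=0):
        """Run a Turing machine given as a dict mapping
        (state, symbol) -> (new_symbol, move, new_state).
        The blank symbol is '_'; the machine stops in state 'halt'."""
        cells = dict(enumerate(tape))  # sparse tape representation
        while state != "halt":
            symbol = cells.get(pos, "_")
            new_symbol, move, state = rules[(state, symbol)]
            cells[pos] = new_symbol
            pos += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # A two-symbol machine that flips every bit of its input, then halts.
    flip_rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_tm("0110", flip_rules))  # 1001
    ```

    "A computation is anything a TM can do" means: anything expressible as such a rule table operating on such a tape.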
  • A Raybould
    86
    I couldn't have been more clear.fishfry

    I rather suspect that's true, unfortunately.
  • fishfry
    2.6k
    I rather suspect that's true, unfortunately.A Raybould

    I haven't seen your handle much before. People who know me on this board know that I'm perfectly capable of getting into the mud. I'm sorely tempted at this moment but will resist the urge. I say to you again:

    A TM is not a physical device. It's an abstract mathematical construction. A computation, by definition, is anything that a TM can do. This isn't me saying this, it's Turing followed by 80 years worth of computer science saying that.

    If you think there's something that counts as a computation, that

    a) Can not be implemented by a TM; and

    b) Is consistent with known physics;

    then by all means tell me what that is to you.
  • A Raybould
    86

    People who know me on this board know that I'm perfectly capable of getting into the mud.fishfry

    I will treat that comment with all the respect it deserves.

    A TM is not a physical device. It's an abstract mathematical construction...fishfry

    Regardless, the question I asked a couple of posts ago applies either way.

    ...A computationfishfry
    ...but it is not the computation that the abstract machine is computing. I covered that in yesterday's post.
    Do you think that in Bostrom's simulated universes, it's TMs all the way down? Clearly not, as his premises don't work in such a scenario - there's a physical grounding to whatever stack of simulations he is envisioning.
  • fishfry
    2.6k
    Do you think that in Bostrom's simulated universes, it's TMs all the way down?A Raybould

    I discussed this at length. You chose not to engage with my questions, my points, or my arguments. You failed to demonstrate basic understanding of the technical terms you're throwing around. You repeatedly failed to define your terms "process" and "simulation" even after my repeated requests to do so.

    This is no longer productive.
  • A Raybould
    86

    First, let me make one thing clear (once again): The issue is not whether understanding is uncomputable, and if you think I have said so, you are either misunderstanding something I wrote, or drawing an unwarranted conclusion. The issue here is your insistence that there is nothing special about understanding and that it is a simple problem for AI.

    I have already given you a working definition that you chose to ignore. Ignoring me is one thing, but if, instead, you were to look at what real philosophers are thinking about the matter, you would see that, though it is a work in progress, at least one thing is clear: there is much more to it than you suppose.

    We can, however, discuss this matter in a way that does not depend on a precise definition. If, as you say, having AIs understand things is simple, then how come the creators of one of the most advanced AI programs currently written acknowledge that understanding common-sense physics, for one thing, is still a problem? Here we have a simple empirical fact that really needs to be explained away before we can accept that understanding (regardless of how you choose to define it) actually is simple - yet many posts have gone by without you doing so.

    Of course, anyone reading your 'explanation' of how to do machine understanding will have a problem implementing it, because it is so utterly vague. It most reminds me of many of the dismissive posts and letters-to-the-editor written after IBM's Watson's success in the Jeopardy contest: "it's just database lookup" was typical of comments by ignoramuses who had no idea of how it worked and by how much it transcended "just" looking up things in a database.


    If they're different then it's possible that conceivable but not possible and possible but not conceivable. Please provide examples of both scenarios for my benefit. Thanks.TheMadFool

    When I read this, I got the distinct feeling that I was dealing with a bot, which would be quite embarrassing for me, given the original topic of this thread! Things that tend to give away a bot include blatant non-sequiturs, a lack of substance, a tendency to lose the thread, and repetition of errors. You asked essentially the same question as this one here (complete with the same basic error in propositional logic) a few posts back, but when I provided just such an example (the same one as I had given more than once before) you ignored it and went off in a different direction, only to return to the same question now.

    I am tempted to just quote my reply from then, but I will spell it out more formally, so you can reference the first part you don't agree with:

    • P1 Anything that has been conceived of is conceivable.
    • P2 I have conceived of the proposition 'The Collatz conjecture is true.'
    • L1 'The Collatz conjecture is true' is conceivable. (P1, P2)
    • P3 I have conceived of the proposition 'The Collatz conjecture is false.'
    • L2  'The Collatz conjecture is false' is conceivable. (P1, P3)
    • P4 Either the Collatz conjecture is true, or it is false; it cannot be both, and there are no other alternatives.
    • L3 If the Collatz conjecture is true, then the conceivable proposition 'The Collatz conjecture is false' does not state a possibility. (L2, P4)
    • L4 If the Collatz conjecture is false, then the conceivable proposition 'The Collatz conjecture is true' does not state a possibility. (L1, P4)
    • C1 There is something that is conceivable but not possible. (L3, L4)
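    The argument above turns on the Collatz conjecture being an open question: we can test it case by case, but no finite amount of checking settles it, so both truth values remain conceivable. A minimal sketch (the function name `collatz_steps` is my own):

    ```python
    def collatz_steps(n):
        """Count iterations of the Collatz map (3n+1 if n is odd,
        n/2 if n is even) until n reaches 1. Every n ever tested
        terminates, but whether all n do is the open conjecture."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    # Checking small cases proves nothing about the general claim:
    # both 'true' and 'false' stay conceivable until there is a proof.
    print(all(collatz_steps(n) >= 0 for n in range(1, 1000)))  # True
    ```

    Whichever way the conjecture eventually falls, one of the two conceivable propositions above will turn out not to state a possibility - which is C1.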
  • TheMadFool
    13.8k
    The issue here is your insistence that there is nothing special about understanding and that it is a simple problem for AI.A Raybould

    In other words, it's implied, you feel understanding is uncomputable i.e. there is "something special" about it and for that reason is beyond a computer's ability.

    If, as you say, having AIs understand things is simple, then how come the creators of one of the most advanced AI programs currently written acknowledge that understanding common-sense physics, for one thing, is still a problem?A Raybould

    I'll make this as clear as I possibly can. Do you think humans are different from machines in a way that gives humans certain abilities that are not replicable in machines? Do we, humans, not obey the laws of chemistry and physics when we're engaged in thinking and understanding? It seems to me that all our abilities, including understanding, arise from, are determined by, chemical and physical laws all matter and energy must obey. The point being there's nothing magical going on in thinking/understanding - it's just a bunch of chemical and physical processes. We're, all said and done, meat machines or wet computers.

    There's nothing physically or chemically impossible going on inside our heads. Hence, I maintain that thinking/understanding is, for sure, computable.

    When I read this, I got the distinct feeling that I was dealing with a bot, which would be quite embarrassing for me, given the original topic of this thread! Things that tend to give away a bot include blatant non-sequiturs, a lack of substance, a tendency to lose the thread, and repetition of errors. You asked essentially the same question as this one here (complete with the same basic error in propositional logic) a few posts back, but when I provided just such an example (the same one as I had given more than once before) you ignored it and went off in a different direction, only to return to the same question now.

    I am tempted to just quote my reply from then, but I will spell it out more formally, so you can reference the first part you don't agree with:

    P1 Anything that has been conceived of is conceivable.
    P2 I have conceived of the proposition 'The Collatz conjecture is true.'
    L1 'The Collatz conjecture is true' is conceivable. (P1, P2)
    P3 I have conceived of the proposition 'The Collatz conjecture is false.'
    L2  'The Collatz conjecture is false' is conceivable. (P1, P3)
    P4 Either the Collatz conjecture is true, or it is false; it cannot be both, and there are no other alternatives.
    L3 If the Collatz conjecture is true, then the conceivable proposition 'The Collatz conjecture is false' does not state a possibility. (L2, P4)
    L4 If the Collatz conjecture is false, then the conceivable proposition 'The Collatz conjecture is true' does not state a possibility. (L1, P4)
    C1 There is something that is conceivable but not possible. (L3, L4)
    A Raybould

    obscurum per obscurius

    Define the words "conceivable" and "possible" like a dictionary does.
  • A Raybould
    86
    In other words, it's implied, you feel understanding is uncomputable i.e. there is "something special" about it and for that reason is beyond a computer's ability.TheMadFool

    Absolutely not. As you are all for rigor where you think it helps your case, show us your argument from "there's something special about understanding" to "understanding is uncomputable."

    Hence, I maintain that thinking/understanding is, for sure, computable.TheMadFool

    As I have pointed out multiple times, that is not the issue in question. Here, you are just making another attempt to change the subject, perhaps because you have belatedly realised that you cannot sustain your original position? Until you have completed the above task, stop attempting to attribute to me straw-man views that I do not hold and have not advocated.

    obscurum per obscuriusTheMadFool

    Now you are just trolling, and using Latin does not alter that fact. Here we have a straightforward argument that you apparently don't agree with, but for which you cannot find a response.

    Define the words "conceivable" and "possible" like a dictionary does.TheMadFool

    I see you are reverting to bot-like behavior, as outlined in my previous post. We have been round this loop before. I see, from other conversations, that you frequently use demands for definitions to browbeat other people when you have run out of arguments, to take the discussion in a different direction... Well, it won't work here: I am not going to follow you in another run around the rabbit-warren until you have addressed the specific argument here.
Welcome to The Philosophy Forum!

Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.