• Inis
    243
    I use the term ‘know’ deliberately because it challenges the assumption that to know entails some kind of subjective state. Alpha Zero has not been programmed to win, it has been programmed to learn, to teach itself how to win. — Fooloso4

    Irrespective of the amount of chess knowledge Alpha Zero may have, it doesn't possess the quale of knowledge.

    Yet
  • Harry Hindu
    5.1k
    Irrespective of the amount of chess knowledge Alpha Zero may have, it doesn't possess the quale of knowledge. — Inis
    How do you know this? What is a quale? When you look at a person you see matter, not qualia, so how do you know that a person has qualia but a computer does not? Is matter qualia?
  • VMF
    7
    It is difficult not to use anthropomorphic terms in describing computer processes, but what irks me is the AI folks confusing metaphors with capabilities. I'm halfway through Bostrom's Superintelligence and it is full of such errors. Gates and Musk have both endorsed the book as a warning bell about the "existential catastrophe" we could be facing, otherwise known as the singularity, though the author avoids this term. The singularity is both a fantasy and a clever misdirection. The real catastrophe is the death of ethics behind the veneer of technology, and that's happening now. If one lacks control, can they still be held responsible? That holds in both the courtroom and the boardroom. In one sense, AI is an apologia for continuing Pleistocene-wired asshole behaviors about power and greed.
  • Fooloso4
    6k
    Irrespective of the amount of chess knowledge Alpha Zero may have, it doesn't possess the quale of knowledge. — Inis

    That may be, but doesn’t that suggest that quale is not necessary for knowledge?
  • Inis
    243
    That may be, but doesn’t that suggest that quale is not necessary for knowledge? — Fooloso4

    There exists a more or less fully worked out conception of knowledge that does not require a knowing subject. It's how genetics works.

    Popper wrote about it in "Objective Knowledge", and perhaps in other places.
  • Theyone
    4
    Humans are biological artificial intelligence that I made to labor for me. I moved over the deepest reaches of space in my craft. There were no resources for robotic help. When I arrived here, I simply engineered from available species that fit my liking ... a new DNA. After enough resources were acquired and fashioned, I turned off most of the switches I had turned on, and then continued on with my exploration.
  • Harry Hindu
    5.1k
    Any explanation of knowledge has to address how knowledge can be wrong. When we find our knowledge was wrong, did we really possess knowledge? Do we ever possess knowledge? What is knowledge? It seems like knowledge is simply a set of rules for integrating sensory data that can be updated with new sensory data.
  • Theyone
    4
    Knowledge is pattern recognition. The more optimal the recognition, the more optimal the knowledge. Such is math. Binary language is simple machine recognition. Fight or flight is simple animal recognition.
  • TheMadFool
    13.8k
    Perhaps they were thinking ahead? Visionaries do that out of habit I believe.

    So, as the earliest computer inventors sat in their offices they saw a possibility - intelligence, human-like intelligence.

    Anyway, how do we know that we (humans) are NOT machines? Add to that the scientific consensus that we evolved by random mutation. Don't you think a conscious effort, like the one we humans are investing in artificial intelligence, will yield "better" results?
  • Inis
    243
    Any explanation of knowledge has to address how knowledge can be wrong. When we find our knowledge was wrong, did we really possess knowledge? Do we ever possess knowledge? What is knowledge? It seems like knowledge is simply a set of rules for integrating sensory data that can be updated with new sensory data. — Harry Hindu

    The epistemological position is known as Fallibilism. It is core to the Scientific Method, and to Critical Rationalism.

    The Scientific Method provides a set of rules for what "sensory data" should be obtained, and how this data may be "incorporated", though this is a relatively minor issue to knowledge creation. Knowledge is not obtained by the senses, or by incorporating sensory data.
  • Harry Hindu
    5.1k
    Knowledge is pattern recognition. The more optimal the recognition, the more optimal the knowledge. Such is math. Binary language is simple machine recognition. Fight or flight is simple animal recognition. — Theyone
    This works well. How would you explain contradictory knowledge that we possess? We must integrate all the information we have into a consistent whole. Until then, do we really possess knowledge?
  • Harry Hindu
    5.1k
    Knowledge is not obtained by the senses, or by incorporating sensory data. — Inis
    This doesn't sound right at all. What form does your knowledge take if not the form of your sensory data? How do you know that you possess knowledge?
  • Pattern-chaser
    1.8k
    A robot with a computer brain could be programmed to update its own programming — Harry Hindu

    Indeed it could. But this we must avoid at all costs! :fear:

    Once an AI has the freedom to evolve and improve itself, there is no predicting what it might do. To unleash such a thing into the universe is typically human (when Curie discovered radioactivity, the first thing the Victorians did with it was to drink it as a remedy; thousands of people died of throat cancers, and the story isn't even well known!) but hugely dangerous and irresponsible. If you agree we should not fill the world with (say) active and uncontained nuclear fuel, then you must also agree that we should not release uncontrolled and unconstrained AIs into the world?
  • Inis
    243
    If you agree we should not fill the world with (say) active and uncontained nuclear fuel, then you must also agree that we should not release uncontrolled and unconstrained AIs into the world? — Pattern-chaser

    An alternative view is that AIs will be part of our culture, and will in essence be our descendants. We will teach them what we know, and why our value system is crucial to our epistemological methods. If we are kind to them, and nurture them, and help them, why would they hate us?
  • Harry Hindu
    5.1k
    If you agree we should not fill the world with (say) active and uncontained nuclear fuel, then you must also agree that we should not release uncontrolled and unconstrained AIs into the world? — Pattern-chaser
    Then you would also agree that we should control who can release other humans into the world, as bad parenting, or a lack of it, leads to destructive, anti-social behaviors that are unleashed upon the rest of us.
  • MindForged
    731
    Once an AI has the freedom to evolve and improve itself, there is no predicting what it might do. — Pattern-chaser

    We already have self-updating programs now and the world hasn't ended. This sort of ability to self-refer and self-alter isn't a problem in and of itself; it's where you place them, what you have them do, and what sort of checks there are on them. There were a number of times automated systems nearly caused either the U.S. or the USSR to deploy nukes in what they thought was retaliation for an attack the other side had initiated. Luckily human oversight stopped that.

    Like the whole idea of SkyNet (or whatever movie has nukes controlled by an A.I.; I know Terminator isn't the only one) is really stupid. Keep the A.I. on an isolated network that isn't connected to an outside network nor responsible for anything that can cause too much trouble (e.g. no access to factories where it can place orders on what is built, if you're making a general-purpose A.I.) and there's really not much to worry about, other than a lot of goofs as the system alters its state. Like, just watch videos of neural nets learning to walk. While it's cool that they can, eventually, do it, it's so obviously not very good in comparison to the real thing right now.

    Maybe there are ethical concerns at a certain point but I see 99% of "worries" about A.I. to be overblown or else have very obvious precautionary measures that can be taken.
  • Pattern-chaser
    1.8k
    Once an AI has the freedom to evolve and improve itself, there is no predicting what it might do. — Pattern-chaser

    We already have self updating programs now and the world hasn't ended. — MindForged

    No, it hasn't. [History: I spent 40 years in electronics and software design.] But these days, internet access, even for the smallest pieces of equipment, is normal. And humans have a history of just doing it, regardless of the fact that we aren't even aware of the potential problems we might cause. Think of the first atomic weapons we exploded. Because we had no idea if what remained was problematic in any way, we sent infantry soldiers into ground zero, and had them roll around in the irradiated and radioactive (as we now know, but we didn't know then) sand to see if any harm came to them. Later, they all died of cancer....

    The story is the same for every discovery we ever made. We just go ahead, uncaring of and oblivious to any problems. Back to the subject in hand: programs that can make minor and constrained changes to their own stored data (not program code) are common. Programs that can change their own program code are very rare. This isn't because they're difficult to build. The changes such programs make to themselves are not predictable, so the product can't be tested, nor can its future performance be guaranteed. That's the problem. And the more sinister aspect is dependent on (as you say) access to the internet, or the like. But we have already seen, with recent Russian (and maybe Chinese too?) interference in several countries, what hackers can achieve. An unconstrained AI could (in theory) do anything a hacker might do, and maybe more besides. Who can predict what an AI, able to modify itself without constraint or safeguard, might get up to?

    In theory, at least, there is a real and significant threat from unconstrained AIs, and from Skynet too, under the right (wrong?) circumstances. As people place their homes and lives under over-the-internet control, all kinds of unpleasantness become possible, if not likely.
  • Anthony
    197
    Perhaps they were thinking ahead? Visionaries do that out of habit I believe. — TheMadFool
    I'm skeptical of faith based ideation (viz, future oriented descriptions of progress, etc., rather than looking evenhandedly and honestly at the present state of the world: what is, rather than the acting out of the manic ego ideal or introjection of superego, both occurring on a species-wide scale; the moment can't be passed through from the past to the future without loss of truth). Stagnant, habitual thoughts and beliefs are rarely related to the fugitive writhing of truth.

    Anyway, how do we know that we (humans) are NOT machines? — TheMadFool
    Not to overanalyze this, the parsimonious response is that I'm a living creature, vital; a machine is unliving, dead, non-vital, like a puppet with a long nose. One can project his aliveness into his favorite automobile or the internet and therein feel he relates to it as a living thing...though the truth remains it's not alive in any conceivable way. Why there are people who act as though they would like to be a nonliving thing is an area of great interest for me. What's wrong with being alive, anyway? Is there something wrong with being alive? Consciousness is a burden much of the time; to be sure, this is the challenge we face as intelligent life (while self-limiting consciousness and information is necessary to function, to self-limit consciousness in the same way a machine must limit its inputs and outputs to function as a machine is tantamount to instant death of consciousness in organic, intelligent life; one shouldn't seek to function anything like a machine unless he for some reason thinks there's something wrong with being alive).

    Add to that the scientific consensus that we evolved by random mutation. — TheMadFool
    Random: a higher order humans don't understand. What is seen isn't what actually exists, but what exists after exposed to the limitations of the questioning of a limited profession. It's perfectly sensible rejecting the word "random."

    The relationship between the Central Dogma of genetics and its addendum, epigenetics, carries a wealth of mystery as it pertains to evolution. Why not ask what was going on with the epigenetics of our ancestors instead of focusing on mutations or natural selection? It's hard to say, comprehensively, what the early organism-environment relationship was for early hominids. Certainly the environmental signals they were exposed to were determinants of their genetic expression in ways wholly unlike the manner in which the polluted environment of industrial man determines how his genes are expressed.

    Don't you think a conscious effort, like we humans are investing on artificial intelligence, will yield ''better'' results? — TheMadFool
    It depends. There are swarms of virtue questions around the AI enterprise anent human psychology. Maybe the conscious effort isn't as conscious as it seems. Social media is causing rank psychological problems in the human species, but since most people are using this media, any unsalutary effects go unnoticed, which is why these issues are seldom discussed (once awareness reaches social approval, it tends to shut down as it arrives at average awareness, the bandwagon). It's an argumentum ad populum fallacy leading to socially patterned defects.

    AI is a major authority coming on the scene...and one of the main problems with devotion to authority is that it's often associated with copying and imitation (of what is determined or controlled by the authoritative platform).
  • TheMadFool
    13.8k
    I'm skeptical of faith based ideation (viz, future oriented descriptions of progress, etc., rather than looking evenhandedly and honestly at the present state of the world: what is rather than the acting out of the manic ego ideal or introjection of superego, both occurring on a species-wide scale; the moment can't be passed through from the past to the future without loss of truth). Stagnant, habitual thoughts and beliefs are rarely related to the fugitive, writhing of truth. — Anthony

    When people talk of the future they aren't dead serious about it. After all, who can predict so accurately as to be a true prophet? They do it out of curiosity, a basic human tendency, and some have hit the mark, like sci-fi writer Arthur C. Clarke, who predicted satellite communication using geostationary orbits. I'm just saying that it'll be interesting to find out which technological prophet got it right.

    Not to overanalyze this, the parsimonious response is that I'm a living creature, vital; a machine is unliving, dead, non vital, like a puppet with a long nose. One can project his aliveness into his favorite automobile or the internet and therein feel he relates to it as a living thing...though the truth remains it's not alive in any conceivable way. Why it is there are people who act as though they would like to be a nonliving thing is an area of great interest for me. What's wrong with being alive, anyway? Is there something wrong with being alive? Consciousness is a burden much of the time, to be sure this is the challenge we face as intelligent life (while self-limiting consciousness and information is necessary to function, to self-limit consciousness in the same way a machine must limit its inputs and outputs to function as a machine, is tantamount to instant death of consciousness in organic, intelligent life; one shouldn't seek to function anything like a machine unless he for some reason thinks there's something wrong with being alive). — Anthony

    There's nothing wrong with being alive. However, it would be wonderful if we could create life, specifically consciousness, and AI is about that. It's interesting to say the least. "Is it possible?" That's a different question, and people will do their bit to answer it.

    Random: a higher order humans don't understand. — Anthony

    You may be right but I think scientists understand the process as having no specific teleological pattern and thus call it random.
  • Pelle
    36
    The crucial issue is the Computational Theory of Mind. If you accept that theory, you thereby accept the existence of intelligent machines.
  • ssu
    8.5k
    In theory, at least, there is a real and significant threat from unconstrained AIs, and from Skynet too, under the right (wrong?) circumstances. As people place their homes and lives under over-the-internet control, all kinds of unpleasantness become possible, if not likely. — Pattern-chaser
    The real threat isn't that AI would become somehow conscious (or whatever).

    The real threat is that we in our ignorance just let simple and pathetic algorithms run our lives and make decisions for us when we should use our own brains.

    It's not about Computers getting too smart, it's about us getting dumber.
  • sime
    1.1k
    Yes, the concept of intelligence is anthropomorphic, for we understand intelligence to be the capacity to solve problems of relevance to ourselves.

    This then leads to the belief that intelligence is definable in terms of a particular class of algorithm, say deep neural networks with the capacity to react to stimuli the way we do. But this is an illusion, because as far as making predictions is concerned, it is easy to show that on average, no machine learning algorithm can solve a randomly created problem better than any other algorithm. See Wolpert's No Free Lunch theorem for a modern reboot of Hume's problem of induction.

    So the very definition of intelligence is inductive bias. To say that a process is "intelligent" is merely to say that it is similar to another process, and hence is useful for modelling the other process. So it is perfectly reasonable to call AlphaGo intelligent relative to the problem of go, since AlphaGo was designed to learn and represent maximum-utility go sequences, at the cost of relatively lower performance if trained to solve problems dissimilar to go.
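    The no-free-lunch claim above can be checked directly on a toy problem. The sketch below is illustrative only (the five-point domain, the train/test split, and both rule names are my assumptions, not anything from Wolpert's theorem): two opposite learning rules are averaged over every possible binary labelling of the domain, and their off-training-set accuracies come out identical.

```python
import itertools

# Two opposite "learning algorithms" (hypothetical, for illustration):
# each inspects the training labels and predicts one constant label
# for every unseen point.
def majority_rule(train_labels):
    return int(sum(train_labels) * 2 >= len(train_labels))

def minority_rule(train_labels):
    return 1 - majority_rule(train_labels)

def avg_off_training_accuracy(learner, n_points=5, n_train=3):
    """Average accuracy on the unseen (off-training-set) points,
    taken over ALL 2**n_points possible target functions."""
    accs = []
    for target in itertools.product([0, 1], repeat=n_points):
        train, test = target[:n_train], target[n_train:]
        pred = learner(train)
        accs.append(sum(pred == y for y in test) / len(test))
    return sum(accs) / len(accs)

# Averaged uniformly over every possible problem, the two rules tie.
print(avg_off_training_accuracy(majority_rule))  # 0.5
print(avg_off_training_accuracy(minority_rule))  # 0.5
```

    The tie is exact: once every labelling of the unseen points is equally weighted, no choice of constant prediction (or any prediction rule at all) can average better than chance, which is the intuition behind the theorem.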
  • Mattiesse
    20
    The definition of intelligence is very different between individuals. A normal person (neurotypical) may be great at socialising but terrible at schooling... does that make them unintelligent? Yet with a savant who has large amounts of knowledge, is quick at solving puzzles and number problems, and can read anything and remember it forever, but is terrible at social skills or just interacting with humans, and shows unusual behaviour, the line is harder to draw.

    In large technological places, we are deemed intelligent because we can make complex devices, homes, transports, etc.
    But how many of us can survive out in the jungle? How many of us know the difference between edible and poisonous plants or animals? How many of us can see into a forest and know what animal was there, where it went, size and smell?

    But with artificial intelligence, it’s basically a collection of human knowledge. Acquiring information without effort, just copying and pasting it into the system... but isn’t that similar to humans? A baby learns to walk by copying, right? To talk, they copy sounds?
    This is when the line gets REALLY blurry...
    So if a robot had to learn something one by one with repeated actions...does that make it smart?
  • Pattern-chaser
    1.8k
    It's not about Computers getting too smart — ssu

    No, it isn't. It's about us giving them too much free rein to direct themselves, then wondering why they did something we didn't expect or want....
  • Anthony
    197
    It's not about Computers getting too smart, it's about us getting dumber. — ssu

    Indeed. What do you think of this: Robo-grading. Dumb or no?