• frank
    16.4k

    Intelligence is about capabilities, particularly in new situations. I don't see how transcendence, whatever that is, enters into it.
  • frank
    16.4k

    Yay! Thanks for reading it with me.
  • Leontiskos
    3.8k
    - I agree with the others who claim that you are mistaken in calling intelligence a psychological construct.
  • frank
    16.4k
    I agree with the others who claim that you are mistaken in calling intelligence a psychological construct.Leontiskos

    I have a feeling that like others, you will not flesh out whatever it is you're talking about.
  • Arcane Sandwich
    2.2k
    There are three important characteristics to this definition. First, when a person's intelligence is considered, it is in the context of their maximal capacity to solve novel problems, not a person's typically manifested intelligent behaviour. (...) Secondly, the essence of human intelligence is closely tied to its application in novel contexts (Davidson & Downing, 2000; Raaheim & Brun, 1985). This entails solving problems that a person has not previously encountered, rather than those with which they are already familiar. (...) Thirdly, human intelligence is underpinned by perceptual-cognitive functions (Thomson, 1919), which, at a basic level, encompass a range of mental processes, including attention, visual perception, auditory perception, and sensory integration (i.e., multiple modalities).Gilles E. Gignac, Eva T. Szodorai

    Hmmm...

    EDIT:

    Though our recommended abstract definition of human intelligence may help elucidate its conceptual nature, it lacks concreteness to be sufficiently useful to guide the development of corresponding psychometric measures of intelligence.Gilles E. Gignac, Eva T. Szodorai

    Yeah, this is a methodological problem. It's a methodological "bad thing", so to speak.

    EDIT 2:

    we propose defining artificial intelligence abstractly as the maximal capacity of an artificial system to successfully achieve a novel goal through computational algorithms.Gilles E. Gignac, Eva T. Szodorai

    Ok. And then they say:

    Our abstract definition of AI is identical to the definition of human intelligence we outlined above, with two exceptions. First, we replaced ‘human’ with ‘artificial system’ to reflect the fundamental distinction between organic, human cognitive processes versus synthetic, computer-based operations inherent in AI systems. Secondly, novel goals are specified to be achieved through the use of computational algorithms, not perceptual-cognitive processes.Gilles E. Gignac, Eva T. Szodorai
  • frank
    16.4k

    I guess they're saying that applying a known solution doesn't indicate intelligence. I was watching a YouTube video of a bird using a piece of cracker as fish bait. It would drop the bit in the water and wait for a fish to come. If this is instinctual and all birds do it, it's not a sign of intelligence. But if the bird worked this out on its own, learning, adapting, adopting new strategies, then it's intelligent.
  • Arcane Sandwich
    2.2k
    I guess they're saying that applying a known solution doesn't indicate intelligence. I was watching a YouTube video of a bird using a piece of cracker as fish bait. It would drop the bit in the water and wait for a fish to come
    frank

    I think I know what you're getting at. The example that I sometimes think about myself is fishing, when the fish thinks that a plastic bait is real fish food. Like, are the fish deluded? Are they imagining things when they see the lure? Is it pure instinct instead, like, "a mechanical thing"? If so, are they as mindless as a stone? Etc.

    If this is instinctual and all birds do it, it's not a sign of intelligence.frank

    It would be instinctual. "Programmed" behavior, in some sense. "Genetic programming", if you will. But I don't like to use computational metaphors too much.

    But if the bird worked this out on its own, learning, adapting, adopting new strategies, then it's intelligent.frank

    Well, some animals can do just that. Some birds (crows, I think, or ravens, or something like that) have been studied in that sense, also some mollusks. Primates can obviously do such things without much difficulty.

    The conclusion of the article says the following, among other things:

    Despite not reaching the threshold of artificial intelligence, artificial achievement and expertise systems should, nonetheless, be regarded as remarkable scientific accomplishments, ones that can be anticipated to impact many aspects of society in significant ways.frank

    Not sure what the article's Main Point is, then.
  • SophistiCat
    2.3k
    This doesn't help with the logical fallacy of equivocation, for "the essential and enduring structure" of humans and computers are very far apart, both actually and epistemologically.Leontiskos

    No one said they were, so I am not sure whose fallacy you are attacking. I was just pointing out the emptiness of critique that, when stripped of its irrelevant elements, consists of nothing but truisms. I am skeptical of a so-called artificial general intelligence (AGI) arising in our time and along the existing lines of development, but my doubts arise from considerations of specific facts about AI (even if my knowledge is very limited in this area), not from dismissive truisms like this:

    Computer programs don't transcend their code.Leontiskos

    Well, of course they don't. That's what they are - code. And humans don't transcend whatever they are (which, if you happen to be of a naturalist persuasion, as Josh likely is, could be dismissively caricatured as "meat" or "dumb matter" or some such). So what?

    That which is designed has a determinate end. It acts the way it was designed to act.Leontiskos

    Another truism (as far as it is true). So, a hypothetical AGI would be designed to replicate and even surpass human intelligence. But that's not the desired conclusion, so now what? What is needed is not lazy dismissals, but getting down and dirty with what the actual limitations of actual AI might be.
  • Leontiskos
    3.8k
    I was just pointing out the emptiness of critique that, when stripped of its irrelevant elements, consists of nothing but truisms.SophistiCat

    I think you just haven't understood the argument, and thus are engaged in a "lazy dismissal." You could disagree with the claim that humans are able to "set their own norms," but you wouldn't be on very solid ground. Most people see that humans do have a capacity to set their own norms and ends, and that this explains the difference between a human and an animal. If we understand that capacity as intelligence, then the question is answered. AI does not set its own norms and ends.

    Your rejoinder that, "Humans are also bound by their 'architecture'," doesn't carry any weight unless we have reason to believe that human "architecture" also precludes the ability to set one's own norms and ends. The reason we argue from architecture in the case of the computer and not in the case of the human is because we understand the computer's architecture but do not understand human "architecture."

    dismissive truisms like this:SophistiCat

    What exactly is your complaint, here? That it is true? That I've relied on a general truth about computers in the argument?

    • Intelligence sets its own norms and ends.
    • Computers don't set their own norms and ends.
    • Therefore, computers are not intelligent.
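
    In bare notation (the predicate letters are just my shorthand; both premises read as universal claims), the form is valid:

        $\forall x\,(Ix \rightarrow Nx)$
        $\forall x\,(Cx \rightarrow \lnot Nx)$
        $\therefore\ \forall x\,(Cx \rightarrow \lnot Ix)$

    where $Ix$ = "x is intelligent", $Nx$ = "x sets its own norms and ends", and $Cx$ = "x is a computer".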

    Do you have a counterargument?
    If you are just going to say, "That's too easy!," then I would point out that not every problem is hard.
  • Richard B
    444
    The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances. Cognition is an elaboration of such organismic dynamics. A.I. changes itself according to principles that we program into it, in relation to norms that belong to us. Thus, A.I. is an appendage of our own self-organizing ecology. It will only think when it becomes a self-organizing system which can produce and change its own norms. No machine can do that, since the very nature of being a machine is to...
    Joshs

    Nice passage. I stuck it into Chat Smith to see if it would confirm the veracity, and there was no disagreement. But I guess this is expected, based on what is expressed.
  • frank
    16.4k
    So just to review the definitions of intelligence mentioned in this article,

    1. Human intelligence is a psychological construct, which means it's an unobservable component of the explanation for certain behaviors, such as "the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” Alternately, we can define human intelligence as the "maximal capacity to achieve a novel goal successfully using perceptual-cognitive [processes]."

    2. AI is a computational construct, which means it's an aspect of explaining the behavior of device/software complexes which evolved in artificial domains and which, for the most part, do not develop skills through social interaction in the wider world.

    We'll go on now to examine 4 different attempts at defining AI:
  • frank
    16.4k
    First, Goertzel (2010; Goertzel & Yu, 2014) defined artificial intelligence as a system's ability to recognise patterns quantifiable through the observable development of actions or responses while achieving complex goals in complex environments.
    here

    I think the typical example of this would be the intelligence of a mobile robot which has to navigate irregular terrain. Doing this requires fluid intelligence, which would be the ability of a robot to identify its environment without directly comparing its visual data to a standard picture of some sort.
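
    To make that concrete with a toy sketch of my own (nothing from the article; the "terrain" vectors, the noise level, and the nearest-centroid classifier are all invented for illustration): a system that only accepts data identical to a stored "standard picture" fails on any novel reading, while a model trained on varied examples can still classify one it has never seen.

        # Toy contrast: fixed-template matching vs. a classifier learned from varied examples.
        # Everything here is a hypothetical illustration, not the article's method.
        import numpy as np

        rng = np.random.default_rng(0)

        # Two made-up "terrain" prototypes as 16-dimensional feature vectors.
        flat = np.zeros(16)
        rubble = np.ones(16)

        def samples(prototype, n, noise=0.3):
            """Noisy variations on a prototype, standing in for sensor readings."""
            return prototype + noise * rng.standard_normal((n, len(prototype)))

        train_flat, train_rubble = samples(flat, 50), samples(rubble, 50)

        # 1) Template matching: accept only inputs (near-)identical to the stored picture.
        def template_match(x, template, tol=1e-6):
            return np.allclose(x, template, atol=tol)

        # 2) Nearest-centroid classifier: learns each class from its varied examples.
        centroids = {"flat": train_flat.mean(axis=0), "rubble": train_rubble.mean(axis=0)}

        def classify(x):
            return min(centroids, key=lambda name: np.linalg.norm(x - centroids[name]))

        # A novel reading the system has never encountered before.
        novel = samples(rubble, 1)[0]

        print(template_match(novel, rubble))  # False: the exact template never matches novel data
        print(classify(novel))                # "rubble": the learned model generalizes to it

    Obviously a cartoon, but it is the difference between matching a fixed picture and generalizing from experience.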

    Per the article, this definition is lacking because it doesn't emphasize novel problems, or problems the AI has never encountered before.
  • Harry Hindu
    5.2k
    I guess they're saying that applying a known solution doesn't indicate intelligence. I was watching a YouTube video of a bird using a piece of cracker as fish bait. It would drop the bit in the water and wait for a fish to come. If this is instinctual and all birds do it, it's not a sign of intelligence. But if the bird worked this out on its own, learning, adapting, adopting new strategies, then it's intelligent.frank
    Why would instinctual behaviors not be intelligent behaviors? Instinctual behaviors are developed over time with the trial and error being performed by natural selection rather than the individual organism.

    When learning a new task, like riding a bike, you eventually learn how to ride it effortlessly. That is to say, that you no longer have to focus on the movements of your feet and balancing on the seat. It is done instinctively once you master the task. Does that mean that intelligence is no longer involved in riding the bike?
  • frank
    16.4k
    Why would instinctual behaviors not be intelligent behaviors? Instinctual behaviors are developed over time with the trial and error being performed by natural selection rather than the individual organism.

    When learning a new task, like riding a bike, you eventually learn how to ride it effortlessly. That is to say, that you no longer have to focus on the movements of your feet and balancing on the seat. It is done instinctively once you master the task. Does that mean that intelligence is no longer involved in riding the bike?
    Harry Hindu

    The goal of this article is to review definitions that have been offered for human and artificial intelligence and pick out one that might allow for quantifiable comparison, so we want something we can test.

    It may be that natural selection is demonstrating something that could be called "intelligence" but we aren't assessing natural selection.

    I would say yes, once a task becomes second nature and you do it without thought, it's no longer a hallmark of intelligence. Maybe the learning phase involved intelligence.
  • SophistiCat
    2.3k
    I think you just haven't understood the argument, and thus are engaged in a "lazy dismissal." You could disagree with the claim that humans are able to "set their own norms," but you wouldn't be on very solid ground.Leontiskos

    I was addressing the argument - not the thesis about what is sine qua non for intelligence, but that it is out of reach for AI by its "very nature." No argument has been given for that, other than truisms, such as that AI cannot do what is outside its limits (no kidding!). But what are those limits? That seems like the crucial question to answer, but personal prejudices are all we get.

    dismissive truisms
    SophistiCat

    What exactly is your complaint, here? That it is true?Leontiskos

    That it is empty.
  • Leontiskos
    3.8k
    That it is empty.SophistiCat

    How is it empty if it supports the second premise of the argument that you ignored?

    Truths about the nature of computers may be "truisms" in that they are obvious, but if you don't understand the implications of such truths then they are less obvious to you than you suppose. And if you won't address the arguments that draw out those implications then I don't know what to tell you.

    I was addressing the argument - not the thesis about what is sine qua non for intelligence, but that it is out of reach for AI by its "very nature."SophistiCat

    But the sine qua non of setting one's own norms [and ends] is the premise used to draw the conclusion that it is inherently out of reach for AI. That sine qua non isn't separate from the argument.

    Given that there is a valid syllogism at hand, I think the only question is what to do with it. "The syllogism relies on a truism" is not a counterargument. And while I am glad that you agree with my "truisms," not everyone does.
  • frank
    16.4k
    A few more efforts at defining AI from here:

    1. "Chollet (2019, p. 27) defined the intelligence of a system as “a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.”

    2. "Wang (2022, p. 35) defined intelligence as “the ability of an information processing system to adapt to its environment while working with insufficient knowledge and resources.”"

    3. "Legg and Hutter (2007b, p. 402) defined intelligence as “an agent's ability to achieve goals in a wide range of environments”"

    Chollet's definition emphasizes efficient skill acquisition, Wang's emphasizes adaptation with insufficient knowledge and resources, and Legg and Hutter's emphasizes achieving goals across a wide range of environments, again coming back to coping with novelty as a central mark of intelligence.
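
    For what it's worth, and going from memory rather than from this article, Legg and Hutter also give that slogan a formal reading in their universal-intelligence work, roughly:

        $\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}$

    where $E$ is a class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler environments carry more weight), and $V_{\mu}^{\pi}$ is the expected value agent $\pi$ achieves in $\mu$. I may be off on the details, but the gist is that "a wide range of environments" gets cashed out as a complexity-weighted sum over all of them.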
  • Leontiskos
    3.8k
    3. "Legg and Hutter (2007b, p. 402) defined intelligence as “an agent's ability to achieve goals in a wide range of environments”"frank

    It sounds like the idea is to conceive of AI as a "soulless" human. So that it has no goals of its own, but if someone gives it a task/goal then it will be able to complete it. A super-duper slave. And its ability to complete arbitrary goals is what makes it intelligent. It is a hypothetical imperative machine which not only provides information about how to achieve any given end, but in fact achieves it.
  • Bob Ross
    2k


    People don't have subjective experiences.

    This is patently false, and it confuses consciousness with sentience and (perhaps) awareness. An AI does not have conscious experience even if it is sentient in the sense that it has awareness.

    The solution to the hard problem of consciousness offered in this OP, apparently, is to radically deny the existence of consciousness in the first place; which I, for one, cannot muster up the faith to accept when it is readily apparent to me introspectively that it does exist.
  • Bob Ross
    2k


    @frank

    A super-duper slave.

    I am predicting that we are going to reinvent slavery with AI, since it is feasible that, although they are not conscious, these sophisticated AIs will be sufficiently rational and free in their willing to constitute persons, and I don't think humanity is going to accept that they thereby have rights.
  • Arcane Sandwich
    2.2k
    Though our recommended abstract definition of human intelligence may help elucidate its conceptual nature, it lacks concreteness to be sufficiently useful to guide the development of corresponding psychometric measures of intelligence. — Gilles E. Gignac, Eva T. Szodorai


    Yeah, this is a methodological problem. It's a methodological "bad thing", so to speak.
    Arcane Sandwich

    In my admittedly ignorant opinion on such matters (how best to define "human intelligence", "artificial intelligence", and just "intelligence"), this is the main problem that the authors of the article have right now. Until they solve this specific problem, that is, until they can meaningfully quantify human intelligence and artificial intelligence at the same time and in the same sense, this discussion won't advance much in terms of new information or new discoveries.
  • frank
    16.4k
    It sounds like the idea is to conceive of AI as a "soulless" human. So that it has no goals of its own, but if someone gives it a task/goal then it will be able to complete it. A super-duper slave. And its ability to complete arbitrary goals is what makes it intelligent. It is a hypothetical imperative machine which not only provides information about how to achieve any given end, but in fact achieves it.Leontiskos

    I suppose so. For the purposes of this paper, intelligence will be tested by presenting a novel problem to a subject and watching the subsequent behavior. They aren't trying to test for autonomy in goal setting, although I guess they could. They just aren't considering that as a requirement for what they're calling intelligence.

    I may be causing confusion because I've drifted somewhat from the OP. I launched off into what we really mean by AI, how we might think about comparing AIs to humans, etc.
  • frank
    16.4k

    This isn't about the hard problem. Did you watch the video in the OP? The OP is about Hinton's thoughts about the sentience of AI. He's a tad eliminative, poor guy.
  • Arcane Sandwich
    2.2k
    I may be causing confusion because I've drifted somewhat from the OP. I launched off into what we really mean by AI, how we might think about comparing AIs to humans, etc.frank

    Then let me ask you this, frank. Does it make sense to use the word "intelligence" for an inorganic object to begin with? What I mean by that is that the concept of intelligence might be entirely biological, as in, in order to be intelligent in the literal sense, you need to have a central nervous system to begin with. Any other use of the word "intelligence" is like the use of the word "horse" to refer to a bronze statue of a horse. It's not really a horse, it's just a statue.
  • Harry Hindu
    5.2k
    The goal of this article is to review definitions that have been offered for human and artificial intelligence and pick out one that might allow for quantifiable comparison, so we want something we can test.

    It may be that natural selection is demonstrating something that could be called "intelligence" but we aren't assessing natural selection.

    I would say yes, once a task becomes second nature and you do it without thought, it's no longer a hallmark of intelligence. Maybe the learning phase involved intelligence.
    frank
    Then not all brain processes are intelligent processes? It seems to me that you are implying that intelligence requires consciousness. If that is the case then why include artificial intelligence and not natural selection for comparison? It may be that AI is demonstrating something that could be called "intelligence".

    Maybe you should look at intelligence as a process and define the necessary components of the process to then say which processes are intelligent and which are not.
  • frank
    16.4k
    Then let me ask you this, frank. Does it make sense to use the word "intelligence" for an inorganic object to begin with? What I mean by that is that the concept of intelligence might be entirely biological, as in, in order to be intelligent in the literal sense, you need to have a central nervous system to begin with. Any other use of the word "intelligence" is like the use of the word "horse" to refer to a bronze statue of a horse. It's not really a horse, it's just a statue.Arcane Sandwich

    Why would you reserve the word "intelligent" for biological entities?
  • Arcane Sandwich
    2.2k
    Why would you reserve the word "intelligent" for biological entities?frank

    Why would someone reserve the word "horse" for a living creature and not a bronze statue that just looks like one, without being one?
  • frank
    16.4k
    Maybe you should look at intelligence as a process and define the necessary components of the process to then say which processes are intelligent and which are not.Harry Hindu

    Intelligence just isn't the kind of thing that can be defined as a process. When we talk about intelligence, we're explaining behavior: "He's so intelligent, he invented massively parallel processing." Intelligence is part of an explanation.
  • frank
    16.4k
    Why would someone reserve the word "horse" for a living creature and not a bronze statue that just looks like one, without being one?Arcane Sandwich

    The thing is, you're starting from the constitution of a thing, and progressing from there to whether it's intelligent. I've been following this article that says start with behavior. I'm not seeing why we should start with constitution. Why would we?
  • Arcane Sandwich
    2.2k
    The thing is, you're starting from the constitution of a thing, and progressing from there to whether it's intelligent. I've been following this article that says start with behavior. I'm not seeing why we should start with constitution. Why would we?frank

    That's a good question, and I don't know the answer to it.