• frank
    16.7k
    I think a key quality of intelligence is the ability to solve problems - to conceive of new ideas from an amalgam of prior experiences. Intelligence seems to have a dual aspect: it is a mental process of blending together prior experiences to solve present problems, and experience is the fuel that feeds that process - the more experiences you have, the more fuel you have to produce novel ideas. This is why most intelligent people are curious. They seek out new experiences to fuel their need to solve problems. — Harry Hindu

    I think you're pretty much nailing the important points from the definition I'm getting out of this article. Intelligence is about problem solving, especially finding solutions to problems one has never seen before.
  • Harry Hindu
    5.2k
    I think you're pretty much nailing the important points from the definition I'm getting out of this article. Intelligence is about problem solving, especially finding solutions to problems one has never seen before. — frank
    Has natural selection solved problems of survival using unique bodies and behaviors that fill specialized niches in the environment? Now I do not see natural selection as an intended, or goal-directed, process, even though it can appear like it is. Natural selection solves problems, but unintentionally. Would intention, or goals, need to be present as a qualifier for intelligence? Intelligence would include the process of maintaining an end goal in the mind in the face of present obstacles (sub-goals).
  • frank
    16.7k

    I think we would agree that when natural selection solves a problem, it's merely following the path of least resistance. The question is: is human intelligence any different from that? If so, how? Is there something supernatural lurking in our conceptions of intelligence?
  • Leontiskos
    3.8k
    - I am tired of repeating myself as well; tired of asking for arguments rather than dismissals with vague allegations such as "truisms." We can leave it there.
  • Harry Hindu
    5.2k

    I think Steven Pinker's response when asked what intelligence is, is applicable here:
    I think intelligence is the ability to use knowledge to attain goals. That is, we tend to attribute intelligence to a system when it can do multiple things, multiple steps or alternative pathways to achieving the same outcome: what it wants. I'm sitting here right now in William James Hall, and my favorite characterization comes from William James himself, the namesake of my building, where he said, "You look at Romeo pursuing Juliet, and you look at a bunch of iron filings pursuing a magnet, you might say, 'Oh, same thing.' There's a big difference. Namely, if you put a card between the magnet and filings, then the filings stick to the card; if you put a wall between Romeo and Juliet, they don't have their lips idiotically attached to opposite sides of the wall." Romeo will find a way of jumping over the wall or around the wall or knocking down the wall in order to touch Juliet's lips. So, with a nonintelligent system, like physical objects, the path is fixed and whether it reaches some destination is just accidental or coincidental. With an intelligent agent, the goal is fixed and the path can be modified indefinitely. That's my favorite characterization of intelligence. — Steven Pinker

    Now, a determinist might say that the path is also fixed and making a distinction between the causal power of "non-physical" knowledge and "physical" objects would be a false dichotomy - a product of dualism. So a more intelligent system would be one that takes more complex paths to reach some goal, or a more complex causal sequence to reach some effect where a less intelligent system would take simpler paths to reach some goal or effect.

    One might say that the ultimate goal is survival and every other goal is a subgoal. Our lives are a path to survival until we ultimately fail.
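    To make Pinker's contrast concrete, here is a minimal sketch in Python (mine, not from the article): a toy grid world in which the "iron filings" follow one fixed straight-line path and stop at the wall, while the "Romeo" agent keeps the goal fixed and searches for any open route around it. The grid, the wall, and the breadth-first search are all illustrative assumptions.

    from collections import deque

    GRID = [
        ".......",
        "...#...",
        "...#...",
        "S..#..G",
    ]

    def find(ch):
        for r, row in enumerate(GRID):
            if ch in row:
                return (r, row.index(ch))

    START, GOAL = find("S"), find("G")

    def filings_path():
        # Fixed path: head straight toward the goal and stop at the first wall,
        # like the filings sticking to the card.
        r, c = START
        path = [(r, c)]
        while (r, c) != GOAL:
            r += (GOAL[0] > r) - (GOAL[0] < r)
            c += (GOAL[1] > c) - (GOAL[1] < c)
            if GRID[r][c] == "#":
                return path, False
            path.append((r, c))
        return path, True

    def romeo_path():
        # Goal fixed, path flexible: breadth-first search tries alternative routes
        # until one reaches the goal (here, over the gap at the top of the wall).
        frontier, seen = deque([[START]]), {START}
        while frontier:
            path = frontier.popleft()
            r, c = path[-1]
            if (r, c) == GOAL:
                return path, True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#" and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    frontier.append(path + [(nr, nc)])
        return [], False

    print("filings reach the goal:", filings_path()[1])   # False: blocked by the wall
    print("romeo reaches the goal:", romeo_path()[1])     # True: finds a detour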
  • frank
    16.7k
    @Harry Hindu

    With an intelligent agent, the goal is fixed and the path can be modified indefinitely. That’s my favorite characterization of intelligence. — Steven Pinker

    I really like that. In the article the guy says, with regard to a goal, intelligence is "what you do when you don't know what to do."
  • Pierre-Normand
    2.6k
    Now, a determinist might say that the path is also fixed and making a distinction between the causal power of "non-physical" knowledge and "physical" objects would be a false dichotomy - a product of dualism. So a more intelligent system would be one that takes more complex paths to reach some goal, or a more complex causal sequence to reach some effect where a less intelligent system would take simpler paths to reach some goal or effect.

    One might say that the ultimate goal is survival and every other goal is a subgoal. Our lives are a path to survival until we ultimately fail.
    — Harry Hindu

    This indeed looks like the sort of genocentric perspective Pinker would favor. Like E. O. Wilson and Richard Dawkins, he seems to believe that genes hold culture (and, more generally, human behavior) on a leash. This view doesn't make him a determinist, since he concedes that human intelligent behavior is flexible enough to allow us to find alternative paths for achieving predetermined goals. But his genocentrism leads him to characterise intelligence in a way that makes little distinction between (mere) animal and human behavior. Although human behavioral proclivities that (for evolutionary purposes) tend to achieve survival goals may be more entrenched than others, rational deliberation often leads us to revise our goals and not merely find alternative ways to achieve them. Humans are sensitive to reasons for abstaining from doing things that would enhance their evolutionary fitness when this evolutionary "goal" conflicts with our values, loyalties, etc. By contrast, Pinker's opposition to the blank slate Lockean conception of the human mind plays into his own conception of the role of human nature and human instincts. He seems to overlook that human practical rationality not merely enables us to achieve our goals more flexibly but also enables us to reflect on their adequacy and to revise them in such a way that they can override (or remain in tension with) our natural proclivities. There is a reason why we hold the naturalistic fallacy to be, indeed, a fallacy.

    In short, Pinker's conception of intelligence, or rationality, echoes Hume's pronouncement in the Treatise of Human Nature that "reason is, and ought only to be the slave of the passions". But I am reminded of David Wiggins who, in various writings, stresses the evolution and refinement of Hume's thoughts about the passions (and the nature of reason itself) between the Treatise (which this famous pronouncement is from) and An Enquiry Concerning Human Understanding. In the latter, Hume (according to Wiggins) grants reason more autonomy than he had in the Treatise (where he thought of it more in instrumental terms) and instead stresses the interdependence that reason has with the passions. This interdependence means that reason can't be dispassionate, but it doesn't entail that the passions are prior and can't be shaped by reason just as much as reason can be directed by the passions. So, my opposition to Pinker's conception is akin to charging him with having taken stock of the ideas in Hume's Treatise and not having let them mature to the Enquiry stage. (Wiggins' take on Hume, and on the evolution of Hume's thought between the Treatise and the Enquiry, is also broadly shared by Annette Baier and Christine Korsgaard. Thanks to GPT4o for pointing that out!)
  • Harry Hindu
    5.2k
    I really like that. In the article the guy says, with regard to a goal, intelligence is "what you do when you don't know what to do." — frank
    I don't think contradictions are helpful definitions. Intelligence is the act of bringing unrelated knowns together to come up with a new, usable known to achieve some goal. New ideas are always an amalgam of existing ones.

    Humans are sensitive to reasons for abstaining from doing things that would enhance their evolutionary fitness when this evolutionary "goal" conflicts with our values, loyalties, etc. — Pierre-Normand
    Sure, when resources are plentiful your goal becomes survival in a social environment, but when resources are scarce, values, loyalties, etc. are thrown out the window in favor of other goals.

    As Jerry Coyne put it,
    "Remember that the currency of selection is not really survival, but successful
    reproduction. Having a fancy tail or a seductive song doesn’t help you survive, but may increase your chances of having offspring—and that’s how these flamboyant traits and behaviors arose. Darwin was the first to recognize this trade-off, and coined the name for the type of selection responsible for sexually dimorphic features: sexual selection. Sexual selection is simply selection that increases an individual’s chance of getting a mate. It’s really just a subset of natural selection, but one that deserves its own chapter because of the unique way it operates and the seemingly nonadaptive adaptations it produces.
    — Jerry Coyne
    I would argue again that if resources are plentiful and the environment is stable, traits like the peacock's tail can evolve. If not, procreation is the last thing on the organism's mind. It takes intelligence to find food or a mate. It takes intelligence to navigate one's environment, whether natural or social (I would say that the social is part of the natural. Everything we do is natural, but that is not saying that what is natural is good or bad. It's just a statement of fact, not a moral statement).
  • Mapping the Medium
    366
    There is the kind of intelligence that is statistically pattern oriented. ... You know this kind, from when you were a child and given an illustration puzzle of shapes and told to pick out the one that doesn't belong (binary negation). But when those puzzles became more complex, did you ever say to yourself that sometimes it is a gray area? ... As in belong how? ... When looking at a group of people, wouldn't you be more inclined to look for differences depending on your past experiences and cultural influences? This is where IQ tests get wonky. ... Existence and reality are complex. Proper negation is not binary and necessarily takes into account many influences that are far beyond statistical and binary. The nominalistic foundation of our current AI is the cause of AI hallucinations and random switching of languages in its processing attempts. ... So, I suppose the question is really about what your philosophical definition of intelligence is. ... If nominalistic AI is enhanced with analog chips and scales to what some refer to as AGI, there will be no cohesion from proper negation, only static statistical patterns that will not evolve properly with the folding and unfolding complexities of reality. ... That's what's coming. ... And that's my probably-not-wanted 2 cents on this topic.

    Our lives are a path to survival until we ultimately fail. — Harry Hindu

    No doubt.
  • ENOAH
    929
    Hinton's argument is basically that AI is sentient because they think like we do. People may object to this by saying animals have subjective experience and AI's don't — frank

    My objection would be nearly the opposite. AI might think like we do. Other animals might not. But animals are sentient. AI are not. Because AI doesn't feel like we and other animals do. Any thoughts, ideas, etc., which AI might have, might be 'generated' by 'itself', seem organic, and might not only resemble but even exceed our own. But any pleasure/displeasure AI has, and any corresponding drives, cannot resemble nor exceed our own, or that of many animals, without being obviously superficial, even tacky. There is no drive to avoid discomfort or pain, to bond with others of the species, to reproduce and survive; no organs besides the thinking and perceiving brain being replicated.

    It's not so much what that says about AI that interests me, but what it says about what humans and AI have in common, not sentience, but thinking. Unlike the other animals, human thinking is an artificial intelligence. Perhaps, a leap of logic, on its face, but perhaps worthy of deeper contemplation.
  • frank
    16.7k
    Unlike the other animals, human thinking is an artificial intelligence. — ENOAH

    That's a fascinating thought. Sentience isn't equivalent to human intelligence. It's something other than that. I think human thought is driven by emotion, which as you say is tied up in interaction with other people primarily, but emotion is part of interacting with the world, and much of that is biological at its base.

    But computers have analog to digital converters to "sense" the world. Is this a kind of feeling? I mean, we could engineer something like a sympathetic nervous response for an AI. Would it be sentient then? I think I might be on the verge of asking a question that can't be answered.
  • Mapping the Medium
    366
    But computers have analog to digital converters to "sense" the world. Is this a kind of feeling? I mean, we could engineer something like a sympathetic nervous response for an AI. Would it be sentient then? I think I might be on the verge of asking a question that can't be answered. — frank

    It is my understanding that analog chips are only added to increase efficiency of digital processing, but the foundation remains nominalistically digital. With the addition of analog, it speeds up the original method and is intended to require less energy.

    In order for AI to better understand the world relationally, a major paradigm shift is needed.
  • frank
    16.7k
    It is my understanding that analog chips are only added to increase efficiency of digital processing, but the foundation remains nominalistically digital. With the addition of analog, it speeds up the original method and is intended to require less energy. — Mapping the Medium

    I was just talking about AD converters that are used for interfacing with the world. Did you know one of the first ideas for a computer was analog? That's what the op-amp originally was.
  • ENOAH
    929
    we could engineer something like a sympathetic nervous response for an AI. Would it be sentient — frank

    My intuition tells me that could be the tacky, superficial replica of a human. Its words, i.e. its thinking, would certainly lead our words/thinking to fall prey to believing it had feelings, like a toddler could be fooled by its toys. But it would be us, not the computer, making that actual leap.

    Nature is natural, machines are artificial, and never the twain shall meet
  • wonderer1
    2.2k
    Nature is natural, machines are artificial, and never the twain shall meet — ENOAH

    That sounds like dogma. Do you have any reasoning to back it up?
  • ENOAH
    929
    Do you have any reasoning to back it up? — wonderer1

    No strong reasoning. Not dogma, hyperbole. Sorry. Did not intend to pass it off as either reasoning or law. If I feel inclined, I might provide more of my reasoning than the admittedly little I already provided in my first post on this thread; but being neither a scientist nor prophet, no doubt it will be lacking, and unsatisfying to you and me both.
    Then why even chime in? Just to suggest a place where someone might start hammering.
  • Mapping the Medium
    366
    I was just talking about AD converters that are used for interfacing with the world. Did you know one of the first ideas for a computer was analog? That's what the op-amp originally was. — frank

    Yes, I do know about that. :grin: My work requires that I research the history of information technology.
    Op-amps act as intermediaries, preparing raw data from thermistors, photodiodes, microphones, and strain gauges for the computer to process.

    Charles Sanders Peirce Recognizes that Logical Operations Could be Carried Out by Electrical Switching Circuits : History of Information

    Whenever I hear/read the word "analog" in discussions about technology, I have the urge to clarify how 'analog' is being considered in the discussion.

    Of course, Peirce's life was not long enough (whose is?) to realize his vision of going beyond binary processing calculations. I have picked up that baton and am moving forward with accomplishing that goal. Much of my work is proprietary, so I do not share details online. However, I am actively on the lookout for collaborators who would like to work with me on this.
  • Mapping the Medium
    366
    Op-amps act as intermediaries, preparing raw data from thermistors, photodiodes, microphones, and strain gauges for the computer to process. — Mapping the Medium

    Last year, I posted an image of an ADC (analog to digital converter) on another online site with the pun "Look! I just bought nominalism in a box!" :rofl:

    It's interesting to think of op-amps as a perfect symbol of reductionist thinking; powerful, useful, but ultimately simplified models of broader, relational systems. Although practical in many applications, they are limited in their ability to fully represent the emergent and contextual nature of the real world. Because of this, I would hesitate to say that they allow a computer to 'sense' the real world. The op-amp is the 'enabler' (conditioning the signal) of the analog to digital transition, then the ADC breaks the analog continuum into discrete, digital data points.

    The op-amp operates purely in the analog realm, but it conditions the signal by amplifying, filtering, and modifying it to ensure that it falls within the voltage range and signal quality required by the ADC.
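    For what it's worth, the two stages described above can be sketched numerically. The following Python snippet (illustrative only, not a circuit model; the gain, input range, and bit depth are made-up values) shows an op-amp-like conditioning step that amplifies a small sensor signal into the converter's input window, followed by an ADC step that breaks that continuum into discrete integer codes.

    import numpy as np

    def condition(signal, gain=5.0, offset=2.5, v_min=0.0, v_max=5.0):
        # Op-amp-like stage: amplify the small sensor signal and shift it so it sits
        # inside the converter's input window, clipping anything that overshoots.
        return np.clip(signal * gain + offset, v_min, v_max)

    def adc(voltages, bits=8, v_min=0.0, v_max=5.0):
        # ADC stage: break the conditioned continuum into 2**bits discrete codes.
        levels = 2 ** bits
        codes = np.round((voltages - v_min) / (v_max - v_min) * (levels - 1))
        return codes.astype(int)

    t = np.linspace(0.0, 1.0, 1000)                                   # one second, sampled at 1 kHz
    raw = 0.3 * np.sin(2 * np.pi * 5.0 * t)                           # small "sensor" signal
    raw += 0.02 * np.random.default_rng(0).standard_normal(t.size)    # measurement noise

    digital = adc(condition(raw))
    print(digital[:8])   # integers between 0 and 255: no longer a continuum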
  • frank
    16.7k
    My work requires that I research the history of information technology. — Mapping the Medium

    Cool. Do you know the story of the invention of the step-by-step switch? And do you know whether that was the kind of switch Turing used in his Enigma decoder?
  • frank
    16.7k
    the ADC breaks the analog continuum into discrete, digital data points. — Mapping the Medium

    Doesn't the central nervous system also deal with converted information?
  • frank
    16.7k
    It's interesting to think of op-amps as a perfect symbol of reductionist thinking; powerful, useful, but ultimately simplified models of broader, relational systems. — Mapping the Medium

    Electro-philosophy. :grin:
  • GrahamJ
    50
    I mean, we could engineer something like a sympathetic nervous response for an AI. — frank

    We could. More interestingly, we have. You may have one of the beasts hiding in plain sight on your driveway. A typical modern car (no self-driving or anything fancy) has upwards of 1000 semiconductor chips. They are used for keeping occupants safe, comfortable, and entertained, adjusting the engine for efficiency, emission control, and so on. Many of the chips are sensors: for pressures and temperatures (you have cells that do this), accelerometers (like the balance organs in your ears), sensors measuring the concentrations of various chemicals in gases (not totally unlike your nose), microphones, vibration sensors, cameras. The information from these is sent to the central car computer, which decides what to do with it.

    Some of what the car is doing is looking after itself. If it detects something wrong it emits alarm calls, and produces distress signals. Beeps and flashing lights. If it detects something very bad it will immobilise the car. Sure it's not as sophisticated as us HUMANS with our GREAT BIG SELF-IMPORTANT SELVES, but it seems kind of like a simple animal to me. Worm? Insect?

    Of course, you can say it is only doing this on our behalf. But you can also say that we're just machines for replicating our alleles. Note that if a car is successful in the marketplace, many copies will be made and new generations of cars will use similar designs. Otherwise, its heritable information will be discarded. Cars are like viruses in this respect: they cannot reproduce themselves but must parasitise something else.
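    A toy sketch of the kind of self-monitoring loop described above (the sensor names and thresholds here are invented for illustration; a real engine control unit is far more involved): sensor readings go to one "central computer" routine, which decides between doing nothing, raising an alarm, or immobilising the car.

    def central_computer(readings):
        # Decide what the car should do with one snapshot of sensor readings.
        actions = []
        if readings["coolant_temp_c"] > 110:
            actions.append("warning_light:engine_temperature")
        if readings["oil_pressure_kpa"] < 100:
            actions.append("warning_light:oil_pressure")
            actions.append("audible_alarm")
        if readings["crash_accel_g"] > 20:
            actions.append("deploy_airbags")
            actions.append("immobilise_car")
        return actions or ["all_clear"]

    snapshot = {"coolant_temp_c": 118, "oil_pressure_kpa": 85, "crash_accel_g": 0.4}
    print(central_computer(snapshot))
    # ['warning_light:engine_temperature', 'warning_light:oil_pressure', 'audible_alarm']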

    Would it be sentient then? I think I might be on the verge of asking a question that can't be answered. — frank

    Well, wait a few years, and you'll be able to ask your car.
  • Mapping the Medium
    366
    Doesn't the central nervous system also deal with converted information? — frank

    Of course, but there is a continuum, so we mustn't think of the central nervous system as a 'part' that can be analyzed as a thing-in-itself. There is cascading of peripheral information that influences our central nervous system too. It doesn't act like a mechanical converter.

    My point being that scaling up binary, simplified, nominalistic models of the world at analog fluidity speed will create a brittle house of cards systemically, which we will lose control of, and that would definitely not be a good thing. We need to maintain analog cohesion as much as possible by developing relational AI.

    Here is a video explaining what I mean. ...

    I only have a minute, so I'll come back later to respond further.
  • Pierre-Normand
    2.6k
    Sure, when resources are plentiful your goal becomes survival in a social environment, but when resources are scarce, values, loyalties, etc. are thrown out the window in favor of other goals.

    As Jerry Coyne put it,
    "Remember that the currency of selection is not really survival, but successful
    reproduction. Having a fancy tail or a seductive song doesn’t help you survive, but may increase your chances of having offspring—and that’s how these flamboyant traits and behaviors arose. Darwin was the first to recognize this trade-off, and coined the name for the type of selection responsible for sexually dimorphic features: sexual selection. Sexual selection is simply selection that increases an individual’s chance of getting a mate. It’s really just a subset of natural selection, but one that deserves its own chapter because of the unique way it operates and the seemingly nonadaptive adaptations it produces.
    — Jerry Coyne

    I would argue again that if resources are plentiful and the environment is stable, traits like the peacock's tail can evolve. If not, procreation is the last thing on the organism's mind. It takes intelligence to find food or a mate. It takes intelligence to navigate one's environment, whether natural or social (I would say that the social is part of the natural. Everything we do is natural, but that is not saying that what is natural is good or bad. It's just a statement of fact, not a moral statement).
    — Harry Hindu

    Evolutionary explanations of the origin of the general traits and intellectual abilities of human beings contribute to explaining why those traits and abilities arose on (long) phylogenetic timescales, but they are often irrelevant to explaining why individual human beings behave in this or that way in specific circumstances, or why specific cultural practices arise within this or that society. I disagree that circumstances of resource scarcity always, or even generally, lead people to act under the instinctual impulses that favor individual fitness.

    In his book If This is a Man (also published under the title Survival in Auschwitz in the U.S.), Primo Levi provides striking examples of abnegation from people who were very severely deprived. But even if it's true that under circumstances of deprivation people can be more driven to pursue goals of self-preservation relative to more impartial or altruistic ones, the point regarding the specific structure of human practical rationality remains. In normal circumstances, where one's survival isn't immediately threatened, exercises of practical rationality and practical deliberation are just as capable of resulting in one's goals being revised in light of considerations that have nothing to do with personal fitness as they are of merely adjusting means to the pursuit of antecedent goals. Circumstances of extreme deprivation can be conceived as furnishing an impediment to the proper exercise of practical rationality rather than as highlighting people's allegedly "true" instinctual goals.
  • Pierre-Normand
    2.6k
    I was talking about Hinton's view, which borrows from Dennett. — frank

    Thank you! I will watch the video that you posted in the OP in full before commenting further, which is what I should have done to begin with.
  • frank
    16.7k
    It's not very deep philosophically. :sad:
  • Harry Hindu
    5.2k
    Evolutionary explanations of the origin of the general traits and intellectual abilities of human beings contribute to explaining why those traits and abilities arose on (long) phylogenetic timescales, but they are often irrelevant to explaining why individual human beings behave in this or that way in specific circumstances, or why specific cultural practices arise within this or that society. I disagree that circumstances of resource scarcity always, or even generally, lead people to act under the instinctual impulses that favor individual fitness. — Pierre-Normand
    This could be said for any organism with an array of senses that responds in real time to immediate changes in the environment. The world, as a dynamic set of patterns, is a selective pressure that favors more adaptable brains as the prominent mental trait. Instincts can only take you so far, as they are more like general-purpose behaviors. Consciousness allows one to fine-tune one's behaviors for multiple environments by learning which behaviors work in certain situations and which do not.

    Cultural practices, language, and views of the world are themselves subject to natural selection, as humans are natural outcomes and part of the environment and are selective pressures themselves. New ideas are "mutated" former ideas, or an amalgam of former ideas, and those ideas that are more useful tend to stand the test of time.
  • Pierre-Normand
    2.6k
    Cultural practices, language, and views of the world are themselves subject to natural selection, as humans are natural outcomes and part of the environment and are selective pressures themselves. New ideas are "mutated" former ideas, or an amalgam of former ideas, and those ideas that are more useful tend to stand the test of time. — Harry Hindu

    Dawkins also popularised the idea that "memes" (a term that he coined) tend to propagate in proportion to their fitness. Ideas being useful no doubt enhances their "reproductive" fitness. But this concept analogises memes to parasites. What enhances the fitness of a meme need not enhance the fitness of the individuals who host it, any more than real parasites enhance the fitness of the animals that they infect. Otherwise, they would be symbionts rather than parasites. One main weakness of the "meme" idea as a way to explain cultural evolution is that human beings aren't passive hosts of memes who pass them on blindly. Cultural practices and common forms of behavior are refined intelligently by people who reflect on them and adapt them to their specific circumstances. An idea that is useful for me to enact in my own circumstances might be useless or harmful for others to enact in their different circumstances. Practical reason isn't a process whereby one gets infected by the memes within a common pool of ideas that have proven to be the most useful in general. Again, practical rational deliberation about one's particular circumstances and opportunities might indeed involve intelligently adapting the means to pursue a predetermined end, but it can also involve revising those very ends regardless of the effects pursuing them might have on one's biological fitness (or reproductive success).
  • Harry Hindu
    5.2k
    Dawkins also popularised the idea that "memes" (a term that he coined) tend to propagate in proportion to their fitness. Ideas being useful no doubt enhances their "reproductive" fitness. But this concept analogises memes to parasites. What enhances the fitness of a meme need not enhance the fitness of the individuals who host it, any more than real parasites enhance the fitness of the animals that they infect. Otherwise, they would be symbionts rather than parasites. One main weakness of the "meme" idea as a way to explain cultural evolution is that human beings aren't passive hosts of memes who pass them on blindly. Cultural practices and common forms of behavior are refined intelligently by people who reflect on them and adapt them to their specific circumstances. An idea that is useful for me to enact in my own circumstances might be useless or harmful for others to enact in their different circumstances. Practical reason isn't a process whereby one gets infected by the memes within a common pool of ideas that have proven to be the most useful in general. Again, practical rational deliberation about one's particular circumstances and opportunities might indeed involve intelligently adapting the means to pursue a predetermined end, but it can also involve revising those very ends regardless of the effects pursuing them might have on one's biological fitness (or reproductive success). — Pierre-Normand
    This isn't much different from how various species have repurposed certain traits (think of the ostrich's wings), or from repurposing a chair as a weapon.

    New traits can only evolve from existing traits. New ideas can only evolve from prior ideas. New ideas are an amalgam of prior ideas.

    An idea that is useful for you in a circumstance would also be useful for others in similar circumstances. Some birds can use their wings to fly in the air or fly through the water. They are different environments, but depending on the trait or idea, it can be useful in similar environments.

    Is every situation the same? No, and that is not my point. My point is that every situation is similar, in some way, to another. The point is: do the differences really matter in this particular instance of using some idea, or are they irrelevant?
  • Pierre-Normand
    2.6k
    Is every situation the same? No, and that is not my point. My point is that every situation is similar, in some way, to another. The point is: do the differences really matter in this particular instance of using some idea, or are they irrelevant? — Harry Hindu

    That may be your point now, but you had also claimed that "[o]ne might say that the ultimate goal is survival and every other goal is a subgoal. Our lives are a path to survival until we ultimately fail." and then supported this claim by quoting the evolutionary biologist Jerry Coyne. I have been arguing that human intelligence isn't merely an ability to find intelligent means for enhancing one's fitness. More generally, practical deliberation can just as often result in revising one's hierarchy of ends as in finding different means for achieving them.

    Many people choose to use contraceptive methods on the occasion of particular intimate encounters. They will also put themselves in harm's way to protect others. Those forms of behavior reduce their reproductive fitness (as well as their Hamiltonian "inclusive fitness") but aren't on that account the manifestation of a lack of intelligence. They may very well smartly judge that maximising the frequency of their alleles in future generations isn't of any relevance at all to the demands of their practical situation in light of their (or their culture's) conception of a good life.