• Raul
    215
    Think about an AI like a baby, capable of learning. We teach the AI what the content of an image is, as we do with our children, and the AI learns and remembers what we teach it the same way a baby does. We talk to the AI and the AI learns how to talk.
    I haven't seen an AI able to build a "model of the world" of the size we humans do yet, but I think it is reasonable to expect they will do it sooner or later.
    So AI learns what we teach it. We embody the AI in a robot, and here we are: a Philip K. Dick replicant!
    What is missing? What is it that we, humans, cannot accept or imagine this artificial intelligence doing?
    Isn't it that this transhuman, artificial epistemology machine is going to redefine what we believe we are?
  • Pantagruel
    3.4k
    I just wish Alexa could follow what I say and do what I ask more than half the time. Maybe AI is making strides in big labs, but practical, interactive AI is a long way off from where I'm sitting now.
  • Olivier5
    6.2k
    I just wish Alexa could follow what I say and do what I ask more than half the time. Maybe AI is making strides in big labs, but practical, interactive AI is a long way off from where I'm sitting now.Pantagruel

    Artificial intelligence is not there yet, agreed, but it's already reaching the level of artificial dumbness.
  • Pantagruel
    3.4k
    Artificial intelligence is not there yet, agreed, but it's already reaching the level of artificial dumbness.Olivier5

    :lol:
  • fishfry
    3.4k
    We teach the AI what is the content of an image as we do with our childrenRaul

    Not true as I understand it. We do not weight our children's brain nodes and backtest the weights repeatedly until we get good results. Current approaches to ML aren't anything like how brains work. We don't even know how brains work. This kind of misunderstanding is very prevalent and is a real hindrance to understanding both brains and AI.
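    For readers unfamiliar with the "weight the nodes and backtest the weights" loop being described, here is a toy sketch (my own illustration, not any particular framework): a single artificial "neuron" fitting y = 2x by gradient descent on squared error. The data and learning rate are arbitrary illustrative choices.

```python
# Toy sketch of the "weight the nodes and backtest the weights" loop:
# one artificial neuron learns y = 2x by gradient descent on squared error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # the single "node weight", starting from scratch
lr = 0.05  # learning rate: how hard each error nudges the weight

for epoch in range(200):
    for x, y in data:
        pred = w * x                 # forward pass: the neuron's guess
        grad = 2 * (pred - y) * x    # gradient of (pred - y)**2 w.r.t. w
        w -= lr * grad               # adjust the weight, then test again

print(round(w, 3))  # converges toward 2.0
```

    Whatever children do when they learn, it is presumably not 200 sweeps of this arithmetic, which is fishfry's point.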

    Artificial intelligence is not there yet, agreed, but it's already reaching the level of artificial dumbness.Olivier5

    Yes I've had the same thought. "Intelligence" means two different things. Humans are intelligent in ways sea slugs are not. On the other hand we say some people are highly intelligent and others not. These are two different meanings of the word. What if someday we build a true, self-aware general AI and it's as bright as Barney Fife?
  • Raul
    215
    artificial dumbnessOlivier5

    :lol:
  • Raul
    215
    This kind of misunderstanding is very prevalent and is a real hindrance to understanding both brains and AIfishfry

    I agree it doesn't work the same way; the hardware-software architecture and the biology are different. But in terms of how information is treated (you see an eye, I tell you it is an eye, so you recalibrate your networks to get closer to recognizing the eye), it is very similar.
    And when you look at what has happened inside a CNN after it has learned, you see that "concepts" similar to the ones our brains use for visual recognition have emerged (i.e. groups of neurons that activate for vertical or diagonal lines, then higher-level groups of neurons for higher-level concepts like a nose or an eye...).
    There is something fundamentally similar between AI and how our brain works.
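    The kind of low-level "concept" described here, a unit that fires on vertical lines, can be sketched in a few lines. In a trained CNN these filter weights are learned; in this illustration the kernel is hand-set (a Sobel-like edge detector of my own choosing) just to show what such a unit computes.

```python
# Sketch of what a first-layer CNN "edge unit" computes: a 3x3 vertical-edge
# filter slid over a tiny image. Real CNNs learn these weights; here they
# are hand-set purely for illustration.

image = [  # 5x5 image: dark left columns, bright right columns -> vertical edge
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [  # responds where brightness changes left-to-right
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    """Valid (no-padding) sliding-window filter; returns the activation map."""
    out = []
    for i in range(len(img) - 2):
        row = []
        for j in range(len(img[0]) - 2):
            s = sum(ker[a][b] * img[i + a][j + b]
                    for a in range(3) for b in range(3))
            row.append(s)
        out.append(row)
    return out

activations = convolve(image, kernel)
print(activations[0])  # strongest response in the windows straddling the edge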

    We don't even know how brains workfishfry

    But we're getting very, very close...
    https://www.youtube.com/watch?v=MSy685vNqYk&ab_channel=PeterWallInstituteforAdvancedStudies
  • fishfry
    3.4k
    There is something fundamentally similar between AI and how our brain works.Raul

    I just disagreed with you about that. Do I have to say it again, then you send me another YouTube video, and I reiterate my disagreement? I'm not uninformed about how neural nets work, and I'm on record that brains don't work like that.
  • Raul
    215
    I just disagreed with you about that.fishfry

    We never know who is on the other side, so I share arguments and materials I find very interesting and formative. This is one of the nice things about this forum.
    If you think you know enough, that's OK; we agree to disagree and that's all.
  • LuckyR
    536
    What is missing? What is it that we, humans, cannot accept or imagine this artificial intelligence doing?
    Isn't it that this transhuman, artificial epistemology machine is going to redefine what we believe we are?Raul

    What is missing is the subjective part of the human interaction. Not dissimilar to how watching a movie that features an actor is not the same as interacting with the actor, regardless of whether the movie is in high definition or not.
  • Olivier5
    6.2k
    There is something fundamentally similar between AI and how our brain worksRaul

    That would be because we try to reproduce in silicon stuff that happens in the brain. So after decades we've made some (modest) progress.
  • tim wood
    9.3k
    AI like a baby, capable of learning.Raul
    Chess engines have been there for a while. And there's Watson. Maybe intuition, which I suspect cannot be directly programmed, is nevertheless near at hand as an AI accomplishment. Hmm.
  • Jack Cummins
    5.3k

    I think that steampunk fiction has emerged as the new offshoot where cyberpunk was going. So, perhaps we will see people with steam engines attached and clock parts, not just robots, walking the streets of our post apocalyptic future.

    But on a serious level, I believe that eugenics is already on its way.
  • Raul
    215
    intuition, which I suspect cannot be directly programmedtim wood

    Try to define intuition and you will see it is overestimated. Ask that question to Ke Jie, the Go champion who lost against the AlphaGo AI. Many Asian observers would tell you the AI showed intuition and imagination in beating Ke Jie...
  • Raul
    215
    I think that steampunk fiction has emerged as the new offshoot where cyberpunk was going. So, perhaps we will see people with steam engines attached and clock parts, not just robots, walking the streets of our post apocalyptic future.Jack Cummins

    Yep, a kind of Mad Max instead of a Blade Runner :rofl:

    eugenics is already on its way.Jack Cummins

    Right, I would say the worst of it risks coming back, reinforced by the latest bio-technologies.
  • Raul
    215
    That would be because we try to reproduce in silicon stuff that happens in the brain. So after decades we've made some (modest) progress.Olivier5

    And maybe the key is in how we manage information... and not so much in the physical substrate.
  • Wayfarer
    22.9k
    Think about an AI like a baby, capable of learning. We teach the AI what is the content of an image as we do with our children and the AI learns and remembers what we teach it the same way a baby does. We talk to the AI and the AI learns how to talk.Raul

    I have worked in AI and use it every day, and this is not an accurate depiction of AI. Human beings have the capacities they do as the result of millions of years of evolution, and tens of thousands of years of cultural development. Computer networks comprise billions of microprocessors coordinated by algorithms to deliver results. They're different in fundamental ways - first and foremost, in the difference between devices and beings.

    Sure, AI is capable of learning. Actually, I have an anecdote about that. At an AI company I worked for, I was given a set of supermarket data to work with in order to understand how 'she' worked. There was data for families, singles, single parents. I asked, 'Is there any data for bachelors?' A moment's silence. 'Is "bachelor" a type of commodity "olive"?' came the reply. 'Yes,' I said, as a joke. 'OK, I'll remember that,' she said. (Of course, this error would be corrected later, and the fact that the system was willing to take a guess was really impressive.) But the kinds of errors these systems make are also revealing of something. The following is excerpted from an essay written in 2005 but makes a similar kind of point:

    Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to encode the common sense of a dog.”

    A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.
    — Steve Talbott

    Logic, DNA and Poetry.
  • Olivier5
    6.2k
    and not that much the physical substrate.Raul

    The substrate likely constrains what the system can and cannot do. Like, we can think in terms of fuzzy sets: sets that have a fuzzy boundary, where it is not quite clear whether one is already in the set or out of it. In other words, human logic accepts borderline cases where the door is neither open nor closed. Silicon is less good at doing this, apparently. It has to be a 0 or a 1; there's no middle ground there.
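    As an aside, ordinary floating-point code can represent those borderline cases directly as degrees of membership. A minimal fuzzy-set sketch (my own illustration; the 0-to-90-degree mapping is an arbitrary assumption):

```python
# Minimal fuzzy-set sketch: "the door is open" as a degree between 0 and 1.
# The mapping from door angle to membership is an arbitrary assumption.

def openness(angle_deg):
    """Membership degree of 'open': 0.0 = shut, 1.0 = fully open."""
    if angle_deg <= 0:
        return 0.0
    if angle_deg >= 90:
        return 1.0
    return angle_deg / 90.0  # the fuzzy middle ground

# Zadeh's classic fuzzy connectives over membership degrees
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

half_open = openness(45)
print(half_open)  # 0.5 -- the door is neither open nor closed
print(fuzzy_or(half_open, openness(0)))
```

    The bits underneath are still 0s and 1s, but the logic expressed on top of them handles the middle ground without difficulty.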
  • Raul
    215
    I have worked in AI and use it every day, and this is not an accurate depiction of AI.Wayfarer

    Yes, of course it is not accurate; it is not even a description, but a valid example to illustrate my idea that AI learns and treats information in a way similar to our brain. It was not my purpose to define it. I started working with AI 20 years ago... I know what I'm talking about, I think :wink:

    When you look inside CNNs for visual recognition that have already learned, you see how they have organized themselves into groups of artificial neurons sensitive to different aspects of vision (edges, angles), and that they have also created a hierarchy for higher-level concepts like a nose or an eye. You realize there is something super-powerful there. It is basically the way our visual cortex works.

    A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — Steve Talbott

    Nowadays AI is still limited because we apply it to specific functionalities that are immediately useful to us, but an AI can potentially acquire a model of the world similar to ours, and then the idea of this article will no longer be true, as such general AIs would manipulate and generate abstract concepts the same way we do...
  • Raul
    215
    The substrate likely constrains what the system can and cannot do.Olivier5

    Substrate implies limitations, but I think those limitations are more quantitative (like processing speed) than qualitative. In traditional computers you can simulate probabilities and randomness that capture what you describe: cases where the door is neither open nor closed.
    Do you think qubits allow new types of functionality versus traditional bits? I think it is only about more power in terms of speed, which then makes certain types of problems solvable on a human time scale.
    But if you have examples of new qualities, new capabilities, I would be very interested. Quantum computers are still hard for me to grasp.
  • Pantagruel
    3.4k
    Not true as I understand it. We do not weight our childrens' brain nodes and backtest the weights repeatedly until we get good results. Current approaches to ML aren't anything like how brains work. We don't even know how brains work. This kind of misunderstanding is very prevalent and is a real hindrance to understanding both brains and AI.fishfry

    Yes, and neural network simulations operate at a conceptual level. Simulators that actually emulate the transmission of neuro-chemical waves in the brain have been tried, but they are much less efficient.
  • tim wood
    9.3k
    Try to define intuition and you will see it is overestimated. Ask that question to Ke Jie, the Go champion who lost against the AlphaGo AI. Many Asian observers would tell you the AI showed intuition and imagination in beating Ke Jie...Raul
    Yes, but as noted, probably not directly programmable/codable. I suspect it's not intuition at all, but a set of heuristics run through very fast. Do people do the same thing? I do not know.
  • Raul
    215
    Do people do the same thing? I do not know.tim wood

    Right, I honestly think none of us knows...
    But looking at the power of the unconscious and how it tricks us in building our consciousness (sometimes it makes us believe something is our own idea, sometimes it feels like an intuition, sometimes the ideas are simply there when we wake up), it is all the same. I believe heuristics plus CNN-like networks are at the core of our reasoning; all of this, of course, is realized in biological neural networks and our central nervous system.
  • Olivier5
    6.2k
    But if you have examples of new qualities, new capabilities I would be very interested. Quantum computers are still hard to grasp for me.Raul

    You're asking the wrong guy. The only thing I've heard is that encoding algorithms into quanta and extracting the solution from the quantum level present significant difficulties...

    You can simulate in traditional computers probabilities and randomness that can simulate what you say, cases where the door is neither open nor closed.Raul

    That's true, although it may take a lot of 0s and 1s to decently replicate/map a fuzzy set; with brute electronic force it's doable.

    Therefore, it ought to be possible to emulate a human brain on silicon, but the question indeed becomes a quantitative one: how many transistors is a neuron worth? And how big a machine do you need to simulate a human brain? One of those big Crays? Two? Ten?...

    I guess the answer will depend on the quality of the replication. The mp3 format requires fewer bits per second of encoded music than other formats (due to data compression techniques), but it is less faithful to the originally recorded music than larger formats; mp3 lacks treble and bass, for instance. Likewise, a crude replication of a human brain may require much less computing power than a finer, more complete and nuanced imitation.
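    The quantitative question above can at least be framed as arithmetic. A back-of-envelope sketch: the neuron and synapse counts are standard order-of-magnitude estimates, while the transistors-per-synapse and transistors-per-chip figures are purely illustrative assumptions, standing in for the unknown "quality of the replication".

```python
# Back-of-envelope framing of "how many transistors is a neuron worth?".
# Neuron/synapse counts are standard order-of-magnitude estimates; the
# transistors-per-synapse and per-chip figures are illustrative assumptions.

NEURONS  = 8.6e10   # ~86 billion neurons in a human brain (common estimate)
SYNAPSES = 1.0e14   # ~100 trillion synapses (order of magnitude)

TRANSISTORS_PER_SYNAPSE = 10      # assumption: a crude, "mp3-quality" copy
CHIP_TRANSISTORS        = 5.0e10  # assumption: ~50 billion on one large chip

transistors_needed = SYNAPSES * TRANSISTORS_PER_SYNAPSE
transistors_per_neuron = transistors_needed / NEURONS  # the question, directly
chips_needed = transistors_needed / CHIP_TRANSISTORS

print(f"{transistors_per_neuron:,.0f} transistors per neuron")
print(f"{chips_needed:,.0f} chips")  # scales linearly with the fidelity assumption
```

    The point of the mp3 analogy survives the arithmetic: every answer here scales linearly with the assumed fidelity, so the "how big a machine" question has no single number.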

Welcome to The Philosophy Forum!

Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.