• TheMadFool
    13.8k
    AI is a big thing in computer science. It's making headlines and also, for better or for worse, falling short of desired results.

    I'm pro-technology, regardless of consequences for humans - I don't care if machines take over the world (should I?).

    Anyway, I think there's a good reason why AI is falling short of expectations.

    Reason - or, refined further, logic - has been and can be replicated on computers. Reason is the only thing that humans distinguish themselves with. We're the leading edge of biological technology, so to speak.

    So, in short, software has already succeeded in replicating the human mind. That's great, but then we continue to attack the problem of AI in a very myopic way, through software design - making more and more complex algorithms that attempt to replicate the human mind.

    I think this approach is flawed because we're ignoring a very obvious, necessary aspect of the problem - hardware (our brains).


    So, in my opinion, if we are to make any real progress with AI, we should give due consideration to hardware. More complex hardware, perhaps mimicking the brain, should be built and turned on. We needn't even load a program. Just wait and see what happens.

    Your views...
  • MikeL
    644
    Hi MadFool, what type of hardware do you think we are lacking that might help the situation? More RAM?
  • TheMadFool
    13.8k
    Perhaps more brain-like in structure.

    Our brains are made of neurons and their language is electrical signals.

    We can easily replicate the electrical signals but copying the brain's physical structure may not be that easy.

    In a very simplistic sense, we could connect some wires together, make some rules for how the signal traverses the system, connect an output device to it and wait to see what happens. Maybe something interesting...
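
    Just to make that concrete, here's a rough toy version of the idea - a handful of randomly wired nodes, one arbitrary rule for how the signal spreads, and one node treated as the output device. Every number and rule in it is an arbitrary assumption, purely for illustration:

    import random

    random.seed(0)
    N = 20                                             # number of "wires"/nodes
    links = {i: random.sample(range(N), 3)             # each node feeds 3 random others
             for i in range(N)}
    state = [i < 4 for i in range(N)]                  # start with the first few nodes "hot"
    OUTPUT = 0                                         # the node we watch as the output device

    for step in range(10):
        incoming = [sum(state[src] for src in range(N) if tgt in links[src])
                    for tgt in range(N)]
        state = [inc >= 2 for inc in incoming]         # rule: a node fires if 2+ of its inputs fired
        print(f"step {step}: output node is {'ON' if state[OUTPUT] else 'off'}")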

    The whole software approach to AI seems to have things backwards.
  • MikeL
    644
    Sure, to simulate the brain would be great. The problem is we still don't really understand it. We thought we almost had it for a while, but it just kept getting more and more complex. The language of the brain is electrochemical, but there are also emergent patterns that we are detecting within this signaling.

    Locations for complex phenomena in the brain also appear to be more decentralised or diffuse than we imagined. Even the neuron itself is not as clear-cut as we had once thought, and their networks are mind-boggling.

    I thought they were working on neural processors at one stage, but I haven't heard anything about it for 10 years or more now.

    I think the software part is most critical for figuring out the logic behind complex cognitive or emotional states - we can always build around it once we know what we want.

    Are you suggesting that we could bypass the coding by letting an organic neural network configure itself so to speak, and we just learn how to train it or affect its development?
  • TheMadFool
    13.8k
    Are you suggesting that we could bypass the coding by letting an organic neural network configure itself so to speak, and we just learn how to train it or affect its development?MikeL

    You say it better than me. I think the software came after the hardware - so it was with biology and likewise in the field of computers.

    And then to approach AI from a purely software angle is, to say the least, overly ambitious. As if we've seen the very best hardware can get.
  • MikeL
    644
    The chicken and the egg? It's an interesting way of thinking about it. I hadn't considered the possibility that the network came first and then the code, but it does make a lot of sense.
  • Efram
    46
    AI is something I invest a lot of time in, but I didn't pursue it formally precisely because the whole field is so unappealing in its current state.

    A big problem (which isn't quite so severe these days, but still persists) is the insistence on breaking everything down into individual parts, with no appreciation of the whole. An example of this is spatial perception; there was so much emphasis on the eyes, before everyone had the epiphany that maybe the brain's understanding of space comes from having a human body that interacts within it - a realisation my teenage self already had many years previously.

    Now there's this obsession with "machine learning" that is just throwing more hardware and optimisations at bad solutions to one particular problem. There's this idea that if we just give today's machine learning algorithms a powerful enough supercomputer on which to run, Skynet will naturally happen as a result. It won't.

    As for why it's like this... perhaps because of the same issues that plague all of science, but it's more pronounced in AI because it's an issue of invention more than exploration.

    Regarding some other points made in the thread:

    It's possible/probable that characteristics of the "architecture" of the brain (parallel processing and such) are significant. I imagine AI of the future being implemented in hardware; it just happens that software is easier to implement and change, whereas experimental hardware would be costly and slow to produce. At least if a given supercomputer fails at AI, they can repurpose it; a failed $250m self-evolving synthetic brain would just end up in a hazmat bin.
  • Rich
    3.2k
    I'm pro-technology, regardless of consequences for humansTheMadFool

    This is very informative and should not be ignored. The OP is willing to do anything without regard to the consequences to human life or a human life. Is this POV becoming more prevalent because of the way science is dehumanizing people? Millions upon millions have been murdered as a result of this POV throughout history.
  • praxis
    6.2k
    Millions upon millions have been murdered as a result of this POV throughout history.Rich

    Just out of curiosity, what exactly is this deadly point of view?

    Themadfool could merely be pointing out the very real possibility that super-human AGI could make our species superfluous.
  • Rich
    3.2k
    It was the Nazi point of view. It's perfectly fine as long as you are on the pulverizing side and not the one being pulverized. I think the Nazis killed over 50 million people - with advanced technology. Millions of people died defending themselves from this POV.

    It most definitely can happen again as the OP's point of view becomes more acceptable. For me, it is quite disgusting.
  • praxis
    6.2k

    Actually Themadfool's POV is quite different from the Nazi POV. He's basically saying that he doesn't care if his side loses.

    I don't care if machines take over the world (should I?).TheMadFool
  • Rich
    3.2k
    Let's quote the whole disgusting post and POV from the beginning including the part that I quoted.

    Dehumanization has been around as long as slavery has existed. The OP is nothing new and I find all such POVs really despicable. And this is no chemical machine talking. I am a human who simply finds it quite disgusting.
  • TheMadFool
    13.8k
    but it does make a lot of sense.MikeL

    I also think the internet is alive:D

    At least if a given supercomputer fails at AI, they can repurpose it; a failed $250m self-evolving synthetic brain would just end up in a hazmat bin.Efram

    If you look at how the biological mind evolved (hardware --> software), then it seems more sensible to follow the same format. To continue, so doggedly, along the software approach to AI is cumulatively less profitable - it's like trying to make a painting come alive.
  • praxis
    6.2k
    DehumanizationRich

    Automation is literally dehumanization, so I guess you're right.

    I just finished a book called Life 3.0, written by the founder of the Future of Life Institute. It discusses the many trajectories that AGI may take in the future, and how we should do our best now to control the path of progress so that we have a better chance of ending up with the future that we want.
  • Rich
    3.2k
    Automation is literally dehumanization, so I guess you're right.praxis

    Automation is just tools. It is the people who preach that humans are not humans, the exact same propaganda used to justify slavery and genocide, who work to dehumanize. It is no accident and has lots of historical precedent.

    Trains, planes, and computers are only tools and do not have the creative ability to dehumanize. Only other humans can do this.
  • praxis
    6.2k
    It is the people who preach that humans are not humans...Rich

    Where are you seeing that in this topic?
  • praxis
    6.2k
    Google DeepMind, using a combination of deep artificial neural networks and reinforcement learning.
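
    For a rough sense of what the reinforcement-learning half of that recipe looks like, here's a minimal tabular Q-learning sketch on a made-up five-state corridor. DeepMind's systems replace the lookup table with a deep neural network; the world, rewards and hyperparameters below are all invented purely for illustration:

    import random

    random.seed(1)
    N_STATES = 5                                 # corridor: states 0..4, reward only at state 4
    ACTIONS = [-1, +1]                           # step left or step right
    Q = [[0.0, 0.0] for _ in range(N_STATES)]    # value table (stand-in for the deep network)
    alpha, gamma, eps = 0.5, 0.9, 0.1            # learning rate, discount, exploration rate

    def pick_action(s):
        # explore occasionally, otherwise act greedily (ties broken at random)
        if random.random() < eps or Q[s][0] == Q[s][1]:
            return random.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            a = pick_action(s)
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward reward + discounted best next value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print([[round(q, 2) for q in row] for row in Q])   # learned action values per state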

  • Rich
    3.2k
    Molecular machinery. Computers. Robots. Chemical reactions. Hard-wired. Hardware. Software. It's all over all the threads concerning life, mind, AI etc.

    In this thread, the OP claims he is pro-technology and doesn't care what the consequences are for humans.

    It's humans that created slavery. It's humans that commit genocide. And the precursor is always dehumanization. Before the Rwanda genocide (promoted by France and Belgium), the government began to refer to the Tutsi population (a race invented by the Belgians) as "cockroaches". 136,000 people have been killed in the U.S. by prescription opioids. No one is prosecuted for this mass killing. People have been desensitized by other humans. The motive is always the same.
  • praxis
    6.2k
    Before the Rwanda genocide (promoted by France and Belgium), the government began to refer to the Tutsi population (a race invented by the Belgians) as "cockroaches".Rich

    Right, so where is this sort of thing happening in this topic? I'm not seeing it.
  • Rich
    3.2k
    In the disgusting OP.

    I'm beginning to think you might agree with it. Do you?
  • praxis
    6.2k


    Let's see...

    AI is a big thing in computer science. It's making headlines and also, for better or for worse, falling short of desired results.TheMadFool

    What are the desired results? A general AI that can achieve any goal that a human can, or a super-human AGI? Should either of these include a mind with a subjective internal experience or consciousness?

    I'm pro-technology, regardless of consequences for humans - I don't care if machines take over the world (should I?).TheMadFool

    I'm pro-tech, but would prefer that the consequences for humans be beneficial. However, I don't believe that technology will save our race. We already have the technology to do a lot of good in the world, but we don't use it to do a lot of good. AGI, which I think will be achieved within a few decades, will give enormous power to those that control it, but unfortunately power corrupts, and enormous power will corrupt enormously.

    TheMadFool goes on to critique AI research without any real justification for doing so.
  • BC
    13.2k
    Our brains are made of neurons and their language is electrical signals.TheMadFool

    Our brains are made of neurons, among other types of cells, and their language is chemical and electrical. Neurons do not communicate with each other by electrical signals. They use chemistry. Electrical current is used within a neuron. If electrical currents were not complicated enough, chemical transmission makes it all even more complicated.
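
    A cartoon of that two-stage picture: an electrical membrane voltage inside each cell, plus a chemical (transmitter-like) quantity that carries the signal across the synapse to the next cell. Every constant here is invented for illustration; nothing is biophysically calibrated.

    dt, steps = 1.0, 100            # time step and duration, in arbitrary "ms"
    v_pre, v_post = 0.0, 0.0        # membrane voltages: electrical, inside each cell
    transmitter = 0.0               # chemical released into the synapse by the first cell
    THRESHOLD, RESET = 1.0, 0.0

    for t in range(steps):
        v_pre += dt * (0.05 - 0.01 * v_pre)                  # a steady input current drives cell 1
        if v_pre >= THRESHOLD:                               # cell 1 spikes (electrical event)...
            v_pre = RESET
            transmitter += 1.0                               # ...and releases transmitter (chemical event)
        transmitter *= 0.8                                   # the chemical is cleared away over time
        v_post += dt * (0.2 * transmitter - 0.01 * v_post)   # cell 2 responds to the chemical signal
        if v_post >= THRESHOLD:
            v_post = RESET
            print(f"t={t}: downstream neuron fired")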

    Intelligence isn't built into the hardware of humans. A newborn has the hardware and knows just about nothing. As newborns become infants, toddlers, young people, and finally adults, the brain continuously changes itself to accommodate everything that is learned.

    How does the brain do this? Among other things, it is directed by DNA. Know of any computers that are under the direction of DNA?
  • BC
    13.2k
    Reason is the only thing that humans distinguish themselves with. We're the leading edge of biological technology,TheMadFool

    Humans distinguish themselves from other species the same way crows, cats, and bees distinguish themselves from other species. We are no more the cutting edge of biological technology than whales and giraffes are.

    Computing machines, on the other hand, all work pretty much alike, have similar capacities (given a similar chip set) and do not evolve. When they are no longer useful, they are junked. There is nothing a computer can do to make itself more useful.
  • BC
    13.2k
    DehumanizationRich

    I think the Nazis killed over 50 million people - with advanced technology.Rich

    Maybe they did; it depends how one counts up the total--it could be fewer or more than 50 million. But the point I wanted to insert here is that their technology was ordinary; their organization and murderous will were advanced. Their planes, bombs, guns, bullets, tanks, and gas chambers weren't high tech--at least any more high tech than was available to the allies. In the Soviet Union, they killed Jews by lining them up in front of ditches and shooting them -- not exactly high tech. They killed their Soviet POWs by putting them in fenced in corrals and just leaving them in the open without food or water. Again, not high tech. What the Nazis had in abundance, though, was a highly focused murderous will.

    The Nazis (and Japanese) developed dehumanization to a high level, by taking the approach you earlier identified -- just not caring what happened to people. The Jews (and others) were referred to as "useless eaters".

    The few pieces of high tech in WWII were RADAR, the automated bombsight, and the atom bomb. Ballistic missiles didn't play a huge role in the war. They were too late. So was the jet engine.
  • TheMadFool
    13.8k
    The OP is willing to do anything without regard to the consequences to human life or a human life.Rich

    It's not that humans can assume a higher moral ground here. Look at how we're treating animals and the environment - with total disregard for their welfare. So, my views aren't as bad as you make them out to be.

    Anyway, apologies if my views offend you.

    TheMadFool goes on to critique AI research without any real justification for doing so.praxis

    I'm just wondering whether scientists are holding the wrong end of the bat. Have they even tried something as simple as I've suggested, viz. connecting together a bunch of wires with some fixed set of protocols for how a signal traverses the network, and then connecting an output device to the network to see what happens? This doesn't sound too expensive to me.

    The brain's architecture surely has something to do with the way our minds are. Also, your example of the child's mind shows that hardware is the prerequisite for any real manifestation of intelligence.
  • MikeL
    644
    I also think the internet is alive:DTheMadFool

    It's an interesting comparison. It is an incredibly integrated neural net of sorts. I don't believe it is alive but it forces one to wonder about the difference between that and the human (or any other animal's) mind vs. brain.

    At least if a given supercomputer fails at AI, they can repurpose it; a failed $250m self-evolving synthetic brain would just end up in a hazmat bin.Efram

    I don't think that's necessarily true. I'm sure some downtown restaurants would be interested.
  • Rich
    3.2k
    It's an interesting comparison. It is an incredibly integrated neural net of sorts. I don't believe it is alive but it forces one to wonder about the difference between that and the human (or any other animal's) mind vs. brain.MikeL

    Humans create networks, networks don't create humans. The ability to create, explore, and learn from creating (evolve) is the essential nature of life. Networks are very simple tools that were created as part of the human activity of creation.

    I'm being serious. Is this so difficult to observe? Does anyone look for a network to share their life with?

    The really big cultural/societal issue we face is the enormous effort being rolled into education, beginning in elementary school, to dehumanize people. It is no accident what is happening. People have to stop being polite about it and just tell the professors the Emperor Has No Clothes. The more people play along, the more they will become fodder for the rich and powerful. Do you think billionaires go around pretending they are robots?
  • praxis
    6.2k
    The really big cultural/societal issue we face is the enormous effort being rolled into education, beginning in elementary school, to dehumanize people.Rich

    If you're talking about rationalization, this is something embedded culturally and not in any way limited to the educational system.
  • Rich
    3.2k
    I agree, but the indoctrination begins very early in school where children and parents are too scared to challenge what is being promoted. The Grade is the axe being held over everyone's head.

    But as adults, we can challenge the game of pretending that we are robots or molecular machinery. We are living, we created this, and we have the choice to change it.
  • Rich
    3.2k
    It's not that humans can assume a higher moral ground here. Look at how we're treating animals and the environment - with total disregard for their welfare. So, my views aren't as bad as you make them out to be.

    Anyway, apologies if my views offend you.
    TheMadFool

    It is one thing to say that we must view life as life and treat it all with appreciation, acknowledging that life requires life to subsist. Different cultures treat this differently.

    However, it is an entirely different thing to equate a hunk of metal with life, and to ignore the consequences to life simply because one is infatuated with a hunk of metal. That hunk of metal will not care for you, share its journey with you, embrace you when you need to feel loved.

    Tools are merely one of many creations of life but life gives us Life.
  • praxis
    6.2k

    Then why don't we change it? Do we even know how to change it?

    Now that I think about it, AI is the epitome of rationalization, with ultimate efficiency and predictability to produce capital gains. And there could be goals that are much worse, goals that employ autonomous weapons, for instance.