• Shawn
    12.6k
    It may not be common knowledge nowadays, but some of the most modern and sophisticated computers are built on neural networks trained on human writing, speech, and even behavior.

    More specifically, human thought itself has been researched with advanced brain-computer interfaces, such as devices that monitor brain activity.

    Where else could one turn for information about how consciousness works than to human beings themselves? Was there ever any other way?

    However, one has to contemplate how to remove bias from these computers, since they are so thoroughly a product of human thought and behavior.

    When analyzing world trends, with such friction between nations over dominance and control, one has to wonder whether these nations are practicing non-biased reasoning.

    Do you think these computers are de-biased or can ever be?
  • fishfry
    2.6k
    It may not be common knowledge nowadays, but some of the most modern and sophisticated computers are built on neural networks trained on human writing, speech, and even behavior.
    Shawn

    Just a quibble with this statement as it's not relevant to the rest of your subject. But at this point I believe it's fairly common knowledge. I'm pretty sure the average person on the street, or at least the average scientifically literate person, has heard the terms machine learning, AI, neural networks, and the like.

    A second point I'd make here is that neural nets run on conventional computing hardware and are essentially Turing machines. That is, neural nets are a clever way of organizing a program; but they are NOT a new or revolutionary mode of computation. There is in fact only one known mode of computation, the TM. That fact has not changed since Turing's 1936 paper. Also note that the same remarks pertain to quantum computing, since in fact quantum algorithms can be simulated on conventional hardware. They just run more slowly. That's an issue of computational complexity and not computability, an important distinction. We could run Shor's algorithm with pencil and paper, given enough time, pencils, and paper.
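    To make the computability point concrete, here's a minimal sketch (my own toy example, with hand-picked hypothetical weights): a tiny feed-forward neural network computing XOR, written as nothing but ordinary loops and arithmetic, the kind of operations any Turing-equivalent machine can carry out.

    ```python
    # A tiny feed-forward neural network, expressed as plain arithmetic.
    # Nothing here is beyond ordinary conventional computation.

    def step(x):
        # Simple threshold activation function.
        return 1 if x > 0 else 0

    def neuron(inputs, weights, bias):
        # A "neuron" is just a weighted sum followed by a threshold.
        return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

    def xor_net(a, b):
        # Hidden layer: one unit fires on "a OR b", the other on "a AND b".
        h1 = neuron([a, b], [1, 1], -0.5)   # OR
        h2 = neuron([a, b], [1, 1], -1.5)   # AND
        # Output: (a OR b) AND NOT (a AND b) -> XOR.
        return neuron([h1, h2], [1, -2], -0.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_net(a, b))
    ```

    A real ML system differs only in scale and in how the weights are found (by training rather than by hand); the underlying computation is the same kind of thing.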

    More specifically, human thought itself has been researched with advanced brain-computer interfaces, such as devices that monitor brain activity.
    Shawn

    That's pretty cutting edge and I'm definitely keeping an eye on the developments. It's more likely such technology will be used to help disabled people see, walk, and so on, rather than to let us think of a Google search and have Google pump the answer into our heads, along with some highly targeted ads.

    Brave new world indeed!

    Where else could one turn for information about how consciousness works than to human beings themselves? Was there ever any other way?
    Shawn

    If machines ever achieve true general AI, meaning actual sentience implemented in a non-human substrate, I predict it will NOT be by means of the current approaches to machine learning. ML is just datamining on steroids. It tells you everything there is to know about what's happened, and nothing at all about what's happening.

    However, one has to contemplate how to remove bias from these computers, since they are so thoroughly a product of human thought and behavior.
    Shawn

    The algorithms are written by humans. All the works of humanity are flawed. You know (off topic entirely) I wanted to say all the works of man, but that's sexist, so I have to say all the works of humanity. It doesn't scan as well, not at all. These are the times that try people's souls, not that people have souls in our secular age.

    Anyhoo. All algorithms are biased by the biases of the designers and programmers who implemented them. ML is even worse, because ML depends entirely on its training dataset. There's a famous example where they trained an AI to distinguish between huskies and wolves, two animals that are hard to tell apart. The AI did spectacularly well but failed on some samples. They dug into the code and found that most of their pictures of wolves (or huskies, I forget which) were taken in snowy environments. The program had learned to identify snowy landscapes, not huskies and wolves!

    https://unbabel.com/blog/artificial-intelligence-fails/
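    The husky/wolf failure mode can be sketched with a toy example. All the data here is made up, and the "learner" is deliberately naive, but the mechanism is the real one: because the snow background and the "wolf" label are perfectly correlated in the biased training set, the learner latches onto the background rather than the animal.

    ```python
    # Toy illustration of a spurious correlation (entirely hypothetical data).
    # Each sample is a pair of binary features (snow_background, pointy_ears)
    # plus a label. In the biased training set, every wolf photo has snow.

    training = [
        ((1, 1), "wolf"), ((1, 1), "wolf"), ((1, 0), "wolf"),
        ((0, 1), "husky"), ((0, 1), "husky"), ((0, 0), "husky"),
    ]

    def train(samples):
        # "Learn" by picking the single feature that best separates the
        # classes on the training data -- a stand-in for what gradient
        # descent does with far more features.
        best_feature, best_acc = 0, 0.0
        for f in range(2):
            hits = sum(1 for (x, y) in samples
                       if ("wolf" if x[f] == 1 else "husky") == y)
            acc = hits / len(samples)
            if acc > best_acc:
                best_feature, best_acc = f, acc
        return best_feature

    feature = train(training)
    print("learned feature index:", feature)  # 0: the snow background

    # A wolf photographed on grass (snow=0, pointy ears) is misclassified,
    # because the model learned "snow", not "wolf".
    wolf_on_grass = (0, 1)
    prediction = "wolf" if wolf_on_grass[feature] == 1 else "husky"
    print(prediction)
    ```

    The training accuracy was perfect, which is exactly why the bias was invisible until someone tested the model on photos the dataset's accidental correlation didn't cover.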

    You will never eliminate this kind of bias from ML systems precisely because that's how they're designed to work. You train them on a dataset. Who chooses the dataset? What biases did those people have? What accidental biases are in the data that nobody thought of? How can you ever tell you're correlating the right thing?

    These problems will never go away. But of course every technology has downsides. Around 3,000 people die in car crashes worldwide every single day, 24/7/365: another 9/11's worth of death, every day. Yet nobody blinks an eye or gives it a moment's thought when they hop in their car for a run to the grocery store.

    In the future a certain number of people will have their lives ruined by machine error and we'll just go, "That's just terrible, something must be done." And deep down, not even consciously, we'll say to ourselves, Better him than me.

    I'm glad I'm old.
  • Shawn
    12.6k


    I'm pretty sure that bias has been operating in the background. I imagine that in the near future, companies that utilize anything from basic brainwave readings to neural arrays in devices that enable people with spinal cord injuries to walk again will be producing a vast array of brain data to work on.

    My concern is mostly with the trends that have governed how companies and government agencies make decisions on differing scenarios, or engage in political posturing, over the past 10-20 years, since the dot-com boom. It seems to me that if bias is such a conundrum with regard to dominant trends in political behavior, then what ought one to do to remove it? The 'ought' seems justified, at least I think so.

    What I'm specifically referring to is reducing the chance of erroneous decisions during tense confrontations, such as military warfare or the standoff between the US and Russia.

    Everything you have said is cause for concern on this topic specifically, don't you think?
  • fishfry
    2.6k
    What I'm specifically referring to is reducing the chance of erroneous decisions during tense confrontations, such as military warfare or the standoff between the US and Russia.

    Everything you have said is cause for concern on this topic specifically, don't you think?
    Shawn

    The world blundered blindfolded into World War I. I've seen people say that global conditions resemble Europe before the Great War. It would only take one small screwup. And plenty of people are stoking the flames. I'm certainly concerned. But in this case the errors of judgment, the bravado, and the stupidity are all sadly human. We can't even blame the machines.

Welcome to The Philosophy Forum!
