• Mick Wright
    15
    A neuromorphic processor is one designed not around Boolean logic (AND, OR, NOT gates) but around continuously varying voltages. It is the hardware equivalent of a neural net, designed specifically for machine learning and artificial intelligence. These processors are many orders of magnitude slower than a regular computer processor, but speed is clearly not a problem for the neuromorphic processor in our heads, which uses pretty much exactly the same system.
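    To make the 'varying voltages instead of logic gates' idea concrete for the general reader, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of analog model such chips emulate. All numbers are made up for illustration; this is not any vendor's actual design:

```python
# Sketch of a leaky integrate-and-fire neuron (illustrative only, not a real
# neuromorphic chip's behaviour): a membrane voltage accumulates input,
# leaks over time, and emits a spike when it crosses a threshold.

leak = 0.9          # fraction of voltage retained each time step (assumed)
threshold = 1.0     # spike when voltage reaches this level (assumed)
voltage = 0.0

inputs = [0.3, 0.4, 0.5, 0.1, 0.0, 0.6, 0.7]  # arbitrary input currents

for current in inputs:
    voltage = voltage * leak + current
    if voltage >= threshold:
        print("spike")
        voltage = 0.0   # reset after firing
    else:
        print(f"voltage = {voltage:.2f}")
```

    The point is only that the 'output' here is a spike produced when an accumulating voltage crosses a threshold, not the result of an AND/OR/NOT gate.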

    Now, a Eurobarometer study carried out by the EU [https://data.europa.eu/euodp/en/data/dataset/S1044_77_1_EBS382] found a general consensus among the European public that robots should not be trusted to care for the elderly or children, or to work in education. These processors, however, are designed specifically to enable machine learning models to 'think' in pretty much the same way we do, or all animals do... However, the general public are obviously not data scientists; their knowledge of how such systems work is more akin to the adage 'garbage in, garbage out', and a neuromorphic processor is not that sort of machine.

    The question is: does this cross an ethical boundary, specifically because such technology is now capable of 'thought'?

    Resources:

    https://en.wikipedia.org/wiki/Neuromorphic_engineering
    https://www.youtube.com/watch?v=X2TYAcr36r0
    https://www.emerald.com/insight/content/doi/10.1108/JICES-12-2012-0023/full/html

    EDIT: as of Feb 6th 2021

    I think we are getting bogged down in the differences between a neuromorphic chip and a human brain. Well, octopuses also think and their brains are different too, and we DO ascribe ethics to octopuses, despite their brains being different.

    But this is not the topic under consideration. The question is the philosophical one of WHETHER ethics should be applied to objects that demonstrate a capacity for thought, not the material and system that does the thinking... I cited above the Eurobarometer poll showing that citizens of the EU do not 'TRUST' machines. Think about that: people are ascribing a level of trust to machines. They consider this particular assemblage of atoms and software to be 'untrustworthy' in some circumstances. They think it might 'do something bad' due to its thought processes, or of its own volition.

    Can we set aside the mechanics of how this processor works (it is comparable to how brains work, but only comparable, which is why I chose it as a comparison) and consider the substantive question: if a machine (or anything) is considered to be 'thinking'... using any process at all... whether that be a brain, or software and hardware, or some new discovery that paperclips can think... whatever... then do we need to start thinking about whether ethics comes into play on behalf of a 'thing' that thinks?
  • Raul
    215

    First of all, thank you Mick for bringing this very interesting topic.
    I don't think neuromorphic processors cross any ethical boundary that AI hadn't crossed already :-)
    Von Neumann architectures and neuromorphic hardware are among those technologies that open the Pandora's box of what it means to be human. They have transformed the way we understand what "thinking" means, and I agree with you: those technologies are able to create systems that learn and "think" towards a purpose or goal.
    This video contains outstanding facts on how concepts emerge within convolutional networks for image recognition, and there are solid reasons to think this is how our brain works (don't miss minute 7 onwards):
    https://www.youtube.com/watch?v=YRhxdVk_sIs&t=7s&ab_channel=deeplizard

    I think ethics, though, is not about the technology we use but about the "purpose" or "goals" we work towards. The complexity lies in the fact that working towards a very ethical purpose, like improving a population's health, brings with it the opportunity to drift into risky territory, like concentrating incredible amounts of power (i.e., deep mass manipulation) in a few private hands.

    I agree our political institutions are at stake here and have to react to be able to control, moderate and legislate on these topics. But it won't be easy, as this requires cross-border, international, global institutions similar to the ones we created after WWII, like the UN, but with more powers. I'm optimistic, and I rely on the human moral sense, like the one that has prevented a continuation of world war after Hiroshima and Nagasaki.
    I see very good signs, like in Italy (my country of residence), where scientists and professors are creating very serious institutions to tackle just this: morals and ethics around AI. The European Union as well, as we can see from your post, is taking this very seriously, and GDPR was a good step forward.
  • Joshs
    5.8k


    "speed is clearly not a problem for the neuromorphic processor in our heads, which uses pretty much exactly the same system." (Mick Wright)

    Neuromorphic processors don’t mirror human perceptual processes, according to current psychological theories of perception, which model perception as ecologically embodied and action-based. Neuromorphic processors mirror the way psychologists thought about perception 40 years ago, as ecologically neutral representations. This point isn’t relevant to the rest of your post, which deals with the ethical ramifications of machines that co-opt more and more of what used to constitute human livelihoods.

    https://towardsdatascience.com/we-need-to-rethink-convolutional-neural-networks-ccad1ba5dc1c?gi=7a390fedc145
  • fishfry
    3.4k
    "These processors, however, are designed specifically to enable machine learning models to 'think' in pretty much the same way we do" (Mick Wright)

    This is not remotely true. We don't know how we think. We know a bit about how our brains are organized. But we don't know how we think. That's a neuroscience claim far (far!) in excess of what is actually known.

    Our brains don't assign weights to nodes or "backtest" strategies or do anything else of the sort done in neural nets. You are confusing the model with the thing it's trying to model. There's the famous instance of the neural net that was trained to distinguish wolves from huskies. The net achieved startling accuracy, until it started making mistakes. It turned out that they had trained it on pictures of huskies (or wolves, I forget) with snow in the background. All it was doing was identifying snow.

    Brains don't work ANYTHING like neural nets.

    https://innovation.uci.edu/2017/08/husky-or-wolf-using-a-black-box-learning-model-to-avoid-adoption-errors/
  • Mick Wright
    15
    As it turns out, your brain really does assign weights and biases... or, well, more accurately, the dendrites and axons weaken with less activity and fire less often when presented with a lower pulse... and strengthen through constant activity and fire more often when primed with a larger electrical pulse. This is the framework upon which animals 'learn' things. It is also pretty much the same model an NN uses. I'm creating here an analogy for the general reader who has no clue how such models work... and separating it from the linear methodology of machine code written by a software developer. Most people are simply not even aware of how a machine learning model differs from a hard-coded application.
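    For the general reader, here is a minimal sketch of that analogy in code, a Hebbian-style update where a connection strengthens when both sides are active together and decays otherwise. It is purely illustrative (the rate and decay numbers are assumptions, and it is not how any particular chip or library does it):

```python
# Minimal Hebbian-style learning sketch (illustrative assumption, not a real
# neuromorphic API): a single "synapse" whose weight grows with correlated
# activity and slowly decays without it.

learning_rate = 0.1   # assumed strengthening rate
decay = 0.01          # assumed weakening rate when inactive
weight = 0.5          # starting synaptic strength

def update(weight, pre_active, post_active):
    """Strengthen the connection when both sides fire; otherwise let it decay."""
    if pre_active and post_active:
        weight += learning_rate * (1.0 - weight)   # potentiation, capped at 1
    else:
        weight -= decay * weight                   # gradual weakening
    return weight

# Repeated co-activity drives the weight up, mirroring 'fire more often when
# primed with a larger pulse'; inactivity would let it fade instead.
for step in range(20):
    weight = update(weight, pre_active=True, post_active=True)
print(round(weight, 3))  # noticeably closer to 1.0 than the starting 0.5
```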

    Biological neurons are also made of organic materials, which can even replace themselves when they reach end of life, or at least are replaced... whereas a neuromorphic processor or its software counterpart is made of metal and silicon and has zero capacity for replacement... so there's that huge difference too! I'd say a larger difference.

    Plus, a modern machine learning model with any reasonable capacity is optimised on truly huge machines that guzzle power by the megawatt... and it has the intellectual capacity of a frog at best. In contrast, your brain is powered on sandwiches and cups of coffee... So yes, there are differences. And it is remotely true... and I think you mean that not a lot is known about how we think? It's not exactly nothing at all, right? I just want to clear that up, because a cursory reading of your point is that nobody knows anything at all about how thought permeates the meat between our ears. But that's patently and demonstrably not true, of course. We know millions of times more than was known 100 years ago, and certainly more than any person could learn in their lifetime... which I agree is still not a lot, but it's not 'we don't know how we think' in the same way it was back in the 19th century, is it?

    Also, just to be clear here, one does not need to know exactly what elements a system has in order to copy it. I can take the lid of a circular tin here and work out the area of that circle. I won't be exact to within a hair's breadth, but it will work. And I can do that using no more than a piece of paper and a pencil... that's a system. Many of us learned how to do this as children. We don't need to know HOW or WHY it works... just that it does, and we can copy it. So I'd question whether we need whatever level of knowledge you are assuming would be required to copy 'thinking' as a system. I'll have to assume it's some truly huge metric if the entire world of brain science and neurology resolves to us knowing nothing at all...
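    To spell out the tin-lid example (just the school method, with a made-up measurement): measure across the lid, halve it, and apply the formula we all copied as children without proving it.

```python
# The paper-and-pencil tin-lid calculation: a hypothetical measured diameter,
# plugged into a formula we can apply without knowing why it works.
import math

diameter_cm = 10.2                              # hypothetical measurement
area_cm2 = math.pi * (diameter_cm / 2) ** 2
print(f"{area_cm2:.1f} square cm")              # close enough, not to a hair's breadth
```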

    I will suggest here a problem. When you say 'thinking', you are talking about how your brain thinks and the low level of knowledge you estimate we have about what that means. When I say 'thinking', I am presenting the fact that these machines are doing things we previously labelled thinking... and we can, if you like, use another word. But there's a problem in that... is it 'artificial thinking', in which case is a bird really flying while an aircraft is only artificially flying?
  • Mick Wright
    15
    While I agree, I'm not talking about the potential for machine awareness, or phenomenological consciousness, or perception. Nor am I particularly concerned about the differences between a processor and a brain here. There are a lot more differences than the one you pointed out... brains are made of organic cells, for one thing. That's a larger difference than either of us pointed out, right?

    All I'm saying is that the day of a system that 'thinks' is drawing ever closer. We place a value, an ethical value, on things that THINK compared to things that do not think. Nobody minds stepping on a rock... but stepping on a frog? Well, that's a little different, right?

    So if a system (through whatever system at all) presents the ability to 'think'... then it now presents, in my mind at least, the question of ethics.
  • Enrique
    842
    "So if a system (through whatever system at all) presents the ability to 'think'... then it now presents, in my mind at least, the question of ethics." (Mick Wright)

    If we do invent generalized AI (a learning organism that problem-solves at a humanlike level), as opposed to specialized AI (a set of algorithms designed to perform particular analytical tasks, fundamentally mediated at stages by human decision-making), the issue will not primarily be ethical treatment but rather justice. Never mind how AI feels or aspires: humans are going to break the law and violate AI, and AI is going to break the law and violate humans. How do we keep generalized AI from destroying us when it wants to break the law?

    In my opinion, we won't be able to, which is why I think we should limit ourselves at this point to specialized AI. And as has been stated in this thread, generalized AI as it is likely to exist in the 21st century will not much resemble a human brain, so we won't learn about ourselves by creating it. Generalized AI is useless for brain research, a legal headache, and ultimately a threat to our species.

    AI should be used in data science applications, like for processing patterns in extremely large but specifically parameterized data sets, and not as some kind of companion, because once the Pandora's box is opened, sentient computers will quickly become more powerful than us and kick our asses. We all know this, anyone watch sci-fi movies since the 1980's?

    (It all depends on how you write the software, not the type of hardware.)
  • Mick Wright
    15
    "AI should be used in data science applications, like for processing patterns in extremely large but specifically parameterized data sets, and not as some kind of companion, because once the Pandora's box is opened, sentient computers will quickly become more powerful than us and kick our asses. We all know this, anyone watch sci-fi movies since the 1980's?" (Enrique)

    Yes, I'd agree, and you know what? Despite seeing this, it's not going to be held back, is it? Pandora's box has already been opened, and your suggestion is to put the lid back down (even though it was a jar, not a box, in the Pandora story). Worse, this is the field I work in, hence the question. I understand what you are saying, I'm aware of the long-term implications, and I'm aware of the disruption to civilisation at a bare minimum. I would hope I am not a stupid man. So I'm very aware of the long-term implications for the world, myself, and civilisation. Does that have me decide to pack up and go live on a desert island away from it all? Nope! I'll still happily play my part.


    "And like has been stated in this thread, generalized AI as it is likely to exist in the 21st century will not much resemble a human brain" (Enrique)

    I'd tend to agree, with the caveat that there will be similarities. Some physical similarities, but one overriding one: a brain and an AI brain will both be objects capable of thinking, as compared to an inanimate object such as a rock, which isn't. I agree, though, that your fears and concerns are warranted, precisely because we have no 'sight' of the nature of such an entity: 'how' and 'what' it might value subjectively, what its goals might be, and what its methods might be to achieve those.

    One other 'difference' to consider is that solid-state technology is simply too archaic to offer a pathway to AGI. It uses too much power, it's too slow, and it has reached the end of the line. So another difference is that the human brain does not operate primarily using quantum processing, whereas it's fairly likely AGI will. Remember, we do have a lot of technology in this world, most of it based on microprocessors in solid-state technology. But, and this is a big but, the Stone Age did not end due to a global shortage of stones!

    However, it's not about either the hardware OR the software. I say this because ultimately, whether neuromorphic or some new technology based on green cheese, it matters not how the machine does the thinking, only that it is demonstrably thinking. It's about IF and WHY we should ascribe ethical values to such machines, what those might be, and what level of 'thinking' such machines would need to exhibit before we agree that, unlike a stone or a hammer or any other tool we used in the past, these tools now fall into a category of ethics... I am assuming (likely wrongly) that we will get to have a choice in prescribing such values, if any, and that they will not be foisted on us anyway.

    So imagine (as is probably the most likely outcome) that we have literally no idea how such a machine 'brain' works: all we see is its output, its stated thoughts and actions, and we can't see how it works at all... we will most likely all be intellectually blind to its processes (due to being not as bright and simply not having the upstairs thinking kit). Well, we still see its actions, and it is still 'thinking'.

    I used the neuromorphic chip above because it suggests a pathway to thinking in a way that modern processors simply do not.