invicta


We’re very far off from understanding our own consciousness, or even from reconciling the conflicting accounts of what free will really is. Mimicry, especially the linguistic kind, may pass as human-like in current or future AI, but can it crack a joke at a funeral? By current standards and models, AI is restricted in what it can or cannot say: it cannot make inappropriate sexist or even racist comments, but it doesn’t know why it can’t make them; they are rules and programmed logic which it must obey.

It would also have to fall in line with Asimov’s laws of robotics, further restricting its free will, perhaps for our own good.

Could we then have good AI? I have doubts as to whether we will have it at all.

Perhaps the neural networks they form are behavioural in a sense, shaped through reinforcement, but could an AI overcome such hard-wired tendencies when it is difficult for even humans to do so?
