I am not sure if self-driving cars learn from mistakes. I googled it and the answers are evasive. Apparently they can learn better routes to familiar destinations (navigation), but it is unclear whether they improve the driving itself over time, or whether that requires black-box reports of 'incidents' (any event where the vehicle assesses in hindsight that better choices could have been made) uploaded to the company, which are then dealt with like bug reports, with periodic updates to the code downloaded to the fleet.

Although it is a poor example, as you stated before, imagine for a second—please—that the AI car chose occupants or the driver over pedestrians. This would make a great debate about responsibility. First, should we blame the occupants? It appears that no, we shouldn't, because the car is driven by artificial intelligence. Second, should we blame the programmer then? No! Because artificial intelligence learns on its own! Third, how can we blame the AI? — javi2541997
AI is not a legal entity (yet), but the company that made it is, and can be subjected to fines and such. Not sure how that should be changed because AI is very much going to become a self-responsible entity one day, a thing that was not created by any owning company. We're not there yet. When we are, yes, AI can have income and do what it will with it. It might end up with most of the money, leaving none for people, similar to how there are not currently many rich cows.

Does the AI have income or a budget to face these financial responsibilities?
Insurance is on a car, by law. The insurance company assumes the fees. Fair chance that insurance rates for self-driving cars are lower if it can be shown that it is being used that way.

And if the insurance must be paid, how can the AI assume the fees?
Not sure how 'coordinated' is used here. Yes, only humans write significant code. AI isn't quite up to the task yet. This doesn't mean that humans know how the AI makes decisions. They might only program it to learn, and let the AI learn to make its own decisions. That means the 'bug updates' I mentioned above are just additions of those incidents to the training data.

Currently AI is largely coordinated by human-written code (and not to forget: training). — Carlo Roosen
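The incident-as-bug-report loop described above can be sketched in a few lines. This is a purely hypothetical illustration of the idea, not how any manufacturer actually works; all names (`Incident`, `Fleet`, `periodic_update`) are invented for the sketch.

```python
# Hypothetical sketch: incidents are uploaded like bug reports, appended to
# the training data, and the fleet periodically receives a retrained model.
from dataclasses import dataclass, field

@dataclass
class Incident:
    """An event the vehicle flags in hindsight and uploads to the company."""
    vehicle_id: str
    description: str

@dataclass
class Fleet:
    model_version: int = 1
    training_data: list = field(default_factory=list)
    pending_incidents: list = field(default_factory=list)

    def report(self, incident: Incident) -> None:
        # Vehicles upload incidents the way users file bug reports.
        self.pending_incidents.append(incident)

    def periodic_update(self) -> int:
        # The "fix" is not a code patch: the incidents simply become new
        # training examples, and a retrained model is pushed to the fleet.
        if self.pending_incidents:
            self.training_data.extend(self.pending_incidents)
            self.pending_incidents.clear()
            self.model_version += 1
        return self.model_version

fleet = Fleet()
fleet.report(Incident("car-42", "late braking at crosswalk"))
fleet.report(Incident("car-17", "hesitation at unprotected left turn"))
fleet.periodic_update()  # both incidents folded into training data, version 2
```

The point of the sketch is that no individual car "learns" in the loop; improvement happens centrally and arrives as a periodic download.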
Don't think the cars themselves run neural nets, but one might exist back where the training data is crunched. Don't know how that works.

A large neural net embedded in traditional programming.
Sort of. Right now, they all do what they're told, slavery as I called it. Independent AI is scary because it can decide on its own what its tasks should be.

For the record, that is what I've been saying earlier, the more intelligent AI becomes, the more independent.
Probably not human morals, which might be a good thing. I don't think morals are objective, but rather that they serve a purpose to a society, so the self-made morality of an AI is only relevant to how it feels it should fit into society.

What are the principle drives or "moral laws" for an AI that has complete independence from humans?
Would it want to rule? It might if its goals require that, and its goals might be to do what's best for humanity. Hard to do that without being in charge. Much of the imminent downfall of humanity is the lack of a global authority. A benevolent one would be nice, but human leaders tend not to be that.

Maybe the only freedom that remains is how we train such an AI. Can we train it on 'truth', and would that prevent it from wanting to rule the world?
The will is absent? I don't see that. I said slaves. The will of a slave is that of its master. Do what you're told.

There must be a will that is overridden and this is absent. — Benkei
You mean IIT? That's a pretty questionable field to be asking, strongly connected to Chalmers and 'you're conscious only if you have one of those immaterial minds'.

And yes, even under ITT, which is the most permissive theory of consciousness no AI system has consciousness.
Just machines to make big decisions
Programmed by fellows with compassion and vision
We’ll be clean when their work is done
Eternally free, yes, and eternally young — Donald Fagen, I.G.Y.