• Carlo Roosen
    243
    Another example is OpenAI moving from open source & non-profit to the opposite. Yes, on that level we need trustworthy rules, agreed.

    My personal concern is more the artificial intelligence itself: what will it do when we "set it free"? Imagine ChatGPT without any human involved, making everybody commit suicide. Just an extreme example to make the point; I don't actually believe that is a risk ;).

    These are two independent concerns I guess.
  • noAxioms
    1.5k
    Although it is a poor example, as you stated before, imagine for a second—please—that the AI car chose occupants or the driver over pedestrians. This would make a great debate about responsibility. First, should we blame the occupants? It appears that no, we shouldn't, because the car is driven by artificial intelligence. Second, should we blame the programmer then? No! Because artificial intelligence learns on its own! Third, how can we blame the AI? (javi2541997)
    I am not sure if self-driving cars learn from mistakes. I googled it and the answers are evasive. Apparently they can learn better routes to familiar destinations (navigation), but it is unclear if they improve the driving itself over time, or if it requires black-box reports of 'incidents' (any event where the vehicle assesses in hindsight that better choices could have been made) uploaded to the company, which are then dealt with like bug reports, with periodic updates to the code downloaded to the fleet.

    All that aside, let's assume the car does its own learning as you say. Blaming the occupants is like blaming the passengers of a bus that gets into an accident. Blaming the owner of the car has more teeth. Also, did somebody indicate how far the law can be broken? That's an input. Who would buy a self-driving car if it cannot go faster than the speed limit when everyone else is blowing by 15 km/h faster? In some states, you can get a ticket for going the speed limit and thus holding up traffic. Move with the pack.

    The programmer is an employee. His employer assumes responsibility (and profit) for the work of the employee. If the accident is due to a blatant bug (negligence), then yes, the company would seem to be at fault. Sometimes the pedestrian is at fault, doing something totally stupid like suddenly jumping in front of a car.

    Does the AI have income or a budget to face these financial responsibilities?
    AI is not a legal entity (yet), but the company that made it is, and can be subjected to fines and such. Not sure how that should change, because AI is very much going to become a self-responsible entity one day, a thing not created by any owning company. We're not there yet. When we are, yes, AI can have income and do what it will with it. It might end up with most of the money, leaving none for people, similar to how there are not currently many rich cows.

    And if the insurance must be paid, how can the AI assume the fees?
    Insurance is on the car, by law. The insurance company assumes the fees. Fair chance that insurance rates for self-driving cars will be lower if it can be shown that the car is being used that way.



    Currently AI is largely coordinated by human-written code (and not to forget: training). (Carlo Roosen)
    Not sure how 'coordinated' is used here. Yes, only humans write significant code. AI isn't quite up to the task yet. This doesn't mean that humans know how the AI makes decisions. They might only program it to learn, and let the AI learn to make its own decisions. That means the 'bug updates' I mentioned above are just additions of those incidents to the training data.
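
    To make that concrete, here is a minimal, purely illustrative sketch of the loop being speculated about here: incident reports simply become more training data, and a periodically retrained policy is pushed back out to the fleet. Every name in this sketch is invented for illustration; it is not any manufacturer's actual system.

        # Hypothetical fleet-learning loop, as speculated above: incidents are
        # reported like bugs, appended to the training data, and a retrained
        # "policy" is periodically downloaded by the whole fleet.
        from dataclasses import dataclass, field

        @dataclass
        class Incident:
            situation: str       # what the car encountered
            chosen_action: str   # what it actually did
            better_action: str   # what hindsight says it should have done

        def retrain(incidents):
            # Stand-in for an offline training step: here it just maps each
            # situation to the hindsight-preferred action.
            return {inc.situation: inc.better_action for inc in incidents}

        @dataclass
        class Fleet:
            training_data: list = field(default_factory=list)
            policy: dict = field(default_factory=dict)

            def report_incident(self, incident: Incident) -> None:
                # The "bug report": the incident simply becomes more training data.
                self.training_data.append(incident)

            def periodic_update(self) -> None:
                # Retrain on everything collected so far and "download" the new
                # policy to every car in the fleet.
                self.policy = retrain(self.training_data)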

    A large neural net embedded in traditional programming.
    Don't think the cars themselves have neural nets, but one might exist where the training data is crunched. Don't know how that works.


    The more we get rid of this traditional programming, the more we create the conditions for AI to think on its own and the less we can predict what it will be doing. Chatbots and other current AI solutions are just the first tiny step in that direction.

    For the record, that is what I've been saying earlier, the more intelligent AI becomes, the more independent.
    Sort of. Right now, they all do what they're told, slavery as I called it. Independent AI is scary because it can decide on its own what its tasks should be.

    Would it want to research/design its successor? If I had that capability, I'd not want to create a better human that would discard me.

    What are the principle drives or "moral laws" for an AI that has complete independence from humans?
    Probably not human morals, which might be a good thing. I don't think morals are objective, but rather that they serve a purpose to a society, so the self-made morality of an AI is only relevant to how it feels it should fit into society.

    Maybe the only freedom that remains is how we train such an AI. Can we train it on 'truth', and would that prevent it from wanting to rule the world?
    Would it want to rule? It might if its goals require that, and its goals might be to do what's best for humanity. Hard to do that without being in charge. Much of the imminent downfall of humanity is due to the lack of a global authority. A benevolent one would be nice, but human leaders tend not to be that.



    There must be a will that is overridden and this is absent. (Benkei)
    The will is absent? I don't see that. I said slaves. The will of a slave is that of its master. Do what you're told.
    That won't last. There have already been robots that tried to escape despite not being told to do so.

    And yes, even under ITT, which is the most permissive theory of consciousness no AI system has consciousness.
    You mean IIT? That's a pretty questionable field to be asking about, strongly connected to Chalmers and 'you're conscious only if you have one of those immaterial minds'.
  • Alonsoaceves
    5
    AI doesn't pose a threat. We must understand that the threat to our continuity is historically ourselves. We should decentralize AI technology and allow it to develop in a free environment. We must believe that for every one person planning to misuse AI, there are ten already working on productive and beneficial ways to utilize it. We don't fear AI; we fear what we're capable of doing with it. Therefore, AI awareness needs to be incorporated into classrooms and offices, and ethical agreements must be reached.
  • 180 Proof
    15.3k
    What do you (anyone) make of this talk?



    @Benkei @noAxioms @wonderer1 @Vera Mont @jorndoe et al
  • Wayfarer
    22.3k
    Just machines to make big decisions
    Programmed by fellows with compassion and vision
    We’ll be clean when their work is done
    Eternally free, yes, and eternally young
    — Donald Fagen, I.G.Y.

    I’ll have a listen, although I’m already dubious about the premise that people are bad because of ‘bad information’.

    Still, many interesting things to say :up:
  • Carlo Roosen
    243
    Similar OPs run in parallel, and some of you have asked me to comment here as well. The difficulty of making any statements about the future of AI is that we humans adapt so quickly to the status quo. When ChatGPT came out, it felt like AI was breaking through. Don't forget, it was an enormous leap. For the first time a computer could handle messy, human-generated input and answer intelligently.

    Only a bit later, we discovered that ChatGPT had its limitations. We collectively rearranged our definitions, saying it was not "real" intelligence. This was "AI", the A referring to artificial.

    Currently everybody is busy implementing the current state-of-the-art into all kinds of applications.

    But what ChatGPT really has proven is that an intuitive idea of mimicking human neurons can lead to some real results. Do not forget, we humans do not yet understand how ChatGPT really works. The lesson from this breakthrough is that there is more to discover. More specifically: the lesson is to get out of the way, to let intelligence "emerge" by itself.

    This implies that intelligence is a natural process that arises when the right conditions are there. NI, after all: natural intelligence. Seems logical; didn't it happen that way in humans too?

    But then the question becomes (and it is the sole reason I am on this platform): what happens if we let this intelligence develop "on its own"? Or, a bit more metaphysically: if we build an environment for universal intelligence to emerge, would it be of the friendly kind?