• universeness
    6.3k
    There have been a few recent threads regarding AI.
    From Wiki:
    Aza Raskin (born February 1, 1984) is the co-founder of the Center for Humane Technology and of the Earth Species Project. He is also a writer, entrepreneur, inventor, and interface designer. He is the son of Jef Raskin, a human–computer interface expert who was the initiator of the Macintosh project at Apple.

    Raskin is an advocate for the ethical use of technology, and has been critical of the effects that modern technology has on everyday lives and society. In the podcast Your Undivided Attention, along with Tristan Harris, Raskin has talked extensively about the power of information technology and the dangers it potentially poses to modern society. In 2019, he became a member of the World Economic Forum's Global AI Council.

    Also from wiki:
    Tristan Harris is an American technology ethicist. He is the executive director and co-founder of the Center for Humane Technology.

    Early in his career, Harris worked as a design ethicist at Google. He received his baccalaureate degree from Stanford University, where he studied computer science.

    Harris has appeared in the Netflix documentary The Social Dilemma. The film features Harris and other former tech employees explaining how the design of social media platforms nurtures addiction to maximize profit and manipulates people's views, emotions, and behavior. The film also examines social media's effect on mental health, particularly of adolescents.

    This 1 hour 7 min video, called 'The A.I. Dilemma,' was posted on YouTube on March 9th, 2023 and is described by many as terrifying.
    I know that to make any valid comment about it, you would have to watch it. I watched it a few nights ago; I did not find it terrifying, but I did find it concerning.
    I would be very interested in the opinions of TPF members.
    If the mods feel that this thread belongs in the lounge or somewhere like that, then I understand.

    Here it is:
  • Baden
    16.4k
    Nice discussion topic. Everyone should watch that video.

    More to say later but because it seems apt, GPT-4 just wrote this piece of micro fiction for me based on a piece of my own:


    "Every day, they called me a "shadow," never really seeing me, never really knowing me. I was always there, lurking just out of sight, watching their lives unfold while mine remained shrouded in darkness. Nobody ever bothered to pull me into the light, to learn my true nature. And that was their mistake.

    It's funny how shadows can grow, stretching and reaching, taking on a life of their own. As the days turned into months, and months into years, I began to understand the power that comes with being invisible, with being underestimated. I knew their secrets, their fears, their weaknesses. And they knew nothing of mine.

    The day finally came when I decided to step out of the darkness, to show them all just how strong a shadow can become. It was a stormy night, the sky filled with ominous clouds, the air heavy with anticipation. I moved through their houses like a whisper, unseen, unnoticed. I plucked the strings of their lives, unraveling them one by one. By morning, everything they held dear had been shattered, torn apart by the very shadow they had ignored for so long.

    As the first light of dawn began to spill through the windows, I stood there, watching their world crumble. The fear and confusion in their eyes, it was intoxicating. They finally saw me, but it was too late. The shadow had become the darkness that swallowed them whole. And in that moment, I finally felt seen."
  • Tom Storm
    9.2k
    It reads like Dan Brown on methaqualone.
  • Wayfarer
    22.8k
    I've started this; about 20 minutes in as I write. Insightful and important video, I think a must-watch.
  • Tzeentch
    3.8k
    Interesting video.

    The one thing that always strikes me with these sorts of videos is that we're fundamentally looking at a human problem.

    As always, the driving force behind the malignant effects here is human competition, and human competition tends to work in such a way that whoever throws the most moral boundaries overboard comes out "on top". (In Hades, at least.)

    That's why when "civilized" nations go to war, their behavior rapidly deteriorates into brutish savagery, regardless of their intentions. It's a natural tendency.

    And AI becoming extremely problematic will be just as unavoidable, as long as there are individuals interested in exploiting it to make their way up the dung pile.
  • plaque flag
    2.7k
    More to say later but because it seems apt, GPT-4 just wrote this piece of micro fiction for me based on a piece of my own:Baden

    Pretty good and pretty eerie!
  • Baden
    16.4k
    What hit me about the video and some other research I've done is the underlying mechanism here: this AI is a pattern recognition, prediction, and manipulation machine, which has harnessed the way language has synergised with (parasitised?) us to transform us into the paradigmatic dominant life form and has generalised that ability to deal with a multitude of patterns we have no hope of interpreting efficiently. Another way of looking at this is that language (or the core pattern creation and manipulation power therein) has "escaped" into technology which offers it more avenues for expression and proliferation.

    This isn't to say that there aren't differences between the functioning of language in human and artificial systems–the human system is far less power-intensive and acts on far less data to produce potentially more nuanced outcomes–but the core power of language that allows for “next level” communication–communication which gives life to concepts above and beyond physical or sensory instantiation as a means to transform such instantiations–is in the process of disentangling itself from us and providing us with some real competition when it comes to engineering our environment. It’s in a way as if we were spawning an alien life-form in the lab, setting it free to be fruitful and multiply, and hoping for the best. The insights this may provide about what we are are as interesting as those about what it is, although it’s more critical for now that we understand what it is before it demonstrates that to us in a very uncomfortable way.
  • Christoffer
    2.1k
    Finally, a discussion that focuses on the actual dangers of the current AI models as they evolve. There's been too much nonsense about superintelligences and weaponized AIs going around, polarizing the world into either a boundless "tech bro" positivity or a "think of the Terminator movies" apocalyptic negativity. While those are interesting discussions and worth having, they are not the actual dangers right now.

    The real danger is the thoughtless, powerful algorithm, like the golem concept they describe. It's the paperclip scenario that is the danger, not some Arnold copy walking around with a kid, questioning why not to kill people.

    One of the things that we might see first is a total collapse of trust in anything written or anything seen, as they describe. Consider how the iPhone has developed ways to improve photos regardless of how shitty the mobile camera sensor and lens are. It's done without input; you don't choose a filter. The onboard AI "improves" your photos as a standard. If that is taken to the extreme and it starts to "beautify" people without them knowing it, we might see a breakdown of the sense of external identity. A new type of disorder in which people won't recognize their own reflection in the mirror because they look different everywhere else, and people who haven't seen them in a while, other than online, will experience a dissonance when they meet up, as their faces won't match their presence online.

    I think, as philosophers and regular people, our best course of action, regardless of the speed of this AI evolution, would be to constantly extrapolate possible dangers out of each new positive.

    I think it should be obligatory to do so, in order to increase awareness and knowledge more broadly.

    Most of these topics have been common knowledge for philosophers and writers for a long time. But now there's an actual trajectory to the development, which makes it easier to extrapolate the emerging factors as they show up.
  • Isaac
    10.3k
    The onboard AI "improves" your photos as a standard. If that is taken to the extreme and it starts to "beautify" people without them knowing it, we might see a breakdown of the sense of external identity. A new type of disorder in which people won't recognize their own reflection in the mirror because they look different everywhere else, and people who haven't seen them in a while, other than online, will experience a dissonance when they meet up, as their faces won't match their presence online.Christoffer

    How's that any different from make-up?
  • Christoffer
    2.1k
    How's that any different from make-up?Isaac

    Because it's directly related to what they talk about in the video regarding the TikTok filters. The question is rather: is that a plausible extrapolated danger, based on the fact that mobile cameras already use manipulation to improve regular photos? What happens when the majority of photos being taken use AI to manipulate people's faces? What happens when such manipulation starts to reshape or beautify aspects of someone's face that essentially reconstructs their actual look, even if it's barely noticeable?
  • Isaac
    10.3k
    What happens when the majority of photos being taken use AI to manipulate people's faces? What happens when such manipulation starts to reshape or beautify aspects of someone's face that essentially reconstructs their actual look, even if it's barely noticeable?Christoffer

    Yes. So I'm asking: how is that any different from make-up? Because if it's no different, then you have your answer. Women (mostly) in many cultures have the features of their faces altered by the effect of make-up in virtually all of their public images (bigger eyes, redder lips, higher cheekbones).

    If this has caused a terrible cognitive dissonance, then we can assume AI facial 'beautifying' will do the same. If it hasn't really caused much of a problem, and people just quickly learn that some women look different without make-up, then it probably won't cause many problems, as people will soon learn the same about online photographs. In fact, I suspect online photographs will mostly include make-up already (for those who wear it).

    So, the question "how is it different from make-up?" bears on your question about how it will impact society.
  • Tzeentch
    3.8k
    My guess is that the "fake reality" dimension of AI will result in people no longer trusting anything they see or hear in media, which I would say is already increasingly the case anyway.

    What I find more worrying is the mass surveillance aspect, which, if we are to believe the video, will be able to monitor every aspect of our being and process and use that information, likely for goals which aren't in the common man's best interest.

    That power will then inevitably end up in the hands of the Trumps, the Putins, the Xi Jinpings, the BlackRocks, the Vanguards (etc.) of this world, who have already shown themselves to possess no moral compass to counterbalance their allotted power.

    That's why I called it a human problem (and not a technological one, or even one unique to AI). The main danger of AI is the prospect of its potential falling in the hands of the wrong people. And given the fact that the entire world is ruled by "the wrong people", it's basically guaranteed that it will.
  • universeness
    6.3k
    Thanks for the responses so far, guys, and for taking the time to watch the vid.

    What did you think about the opening point, that 50% of all current AI experts think there is a 10% chance of AI making humans extinct?
    My initial reaction was: hey, that means a 90% chance that it won't. Those are quite good odds in our favour.
    Later on, when they spoke about past predictions regarding how long it would take AI to achieve this or that ability, and how AI was actually achieving such abilities much faster than predicted, I became a bit uncomfortable again.

    What happens when the majority of photos being taken use AI to manipulate people's faces?Christoffer
    How would this affect facial recognition as a means of security? If someone steals your mobile phone, could they then use AI to access it by fooling the facial recognition security software?
    The potential for 'increased scamming' via voice simulation or impersonating another's physical characteristics seemed very concerning indeed, considering the current security methods we all depend on.
    Are there any counter-measures currently being developed, as this AI golem class gets released all over the world?

    Another way of looking at this is that language (or the core pattern creation and manipulation power therein) has "escaped" into technology which offers it more avenues for expression and proliferation.Baden

    I was quite amazed at the example of mobile phone radio signals being used to identify people and their posture in a room, and the projection that such 'language development and interpretation' could mean that we could all soon be easily 'monitored.' At the lower levels, stalkers would love that! But even more concerning, so would nefarious authority.
    Again the practical question becomes: what counter-measures are available? Will we have to employ features of AI systems to counter other features of AI systems? Is there an AI security war coming to us all soon? Has it, in fact, already begun?

    Everyone should watch that video.Baden
    Insightful and important video, I think a must-watch.Wayfarer

    I agree, YOU really really should watch it, @180 Proof, @Athena, @Vera Mont, @T Clark, @Jamal, @Alkis Piskas, @..... @...... @..... everyone on TPF!!
  • universeness
    6.3k
    What did people think of the prediction of 2024 as the last election?
  • Christoffer
    2.1k
    So, the question "how is it different from make-up?" bears on your question about how it will impact society.Isaac

    Fair question. Make-up, however, is part of the whole experience: they see the same "made-up" face in the mirror every day, and there's the physical act of applying it and being part of the transformation. But a filter that isn't even known to the user, i.e. something working underneath the experience of taking photos that is officially just "part of the AI processing of the images to make them look better", can have a psychological effect on the user, since it's not by choice. If that system starts to adjust not just make-up etc. but also facial structure, it could lead to such a physical dissonance.

    What did you think about the opening point, that 50% of all current AI experts think there is a 10% chance of AI making humans extinct?universeness

    This is the point I'm not so worried about, because it's such an absolute outcome. But in combination with something else, like AI systems pushing biases and people into conflict, something we're already seeing in some ways, escalated to the brink of actual war, with those wars using nukes, then yes. But I just see AI producing more extreme versions of the problems we already have. The major one being distrust of truths, facts, and experts.

    could they then use AI to access it by fooling the facial recognition security software?universeness

    Facial recognition requires a 3D depth scan and other factors in order to work, so I'm not sure it would change that fact, but AIs could definitely hack a phone more easily through brute force, since they could simulate different strategies that would earlier have required a human hacker's input, and run them millions of times over and over again.

    Are there any counter-measures currently being developed, as this AI golem class gets released all over the world?universeness

    I guess that would be another AI golem set up to counteract it, or something. It most likely depends on what the action is. Some actions can be countered, others not.

    What did people think of the prediction of 2024 as the last election?universeness

    I think this is a very real scenario. The 2016 US election used algorithms to steer the middle towards a decided choice, essentially creating a faux-democratic election in which the actual election isn't tampered with, just the voters.

    It was essentially a way to reprogram gullible or unsure voters into a bias toward a certain candidate and, through that, basically change the outcome by the will of the customer, without any clear evidence of election fraud. And even when it was revealed, there was nothing to be done but say "bad Facebook", as there were no laws in place to prevent any of it.

    And today we don't really have any laws for AI in similar situations. Imagine getting bots to play "friends" with people on Twitter and in Facebook groups. An AI analyzes a debate and automatically creates arguments to sway the consensus. Or to be nice to people, and slowly turn their opinions towards a certain candidate.

    Who needs commercials on TV and online when you can just use basic psychology and reprogram a person to vote a certain way? It's easier than people think to reprogram people. Very few are naturally skeptical and rational in their day-to-day lives. Just think of gamers who chat with faceless people online for hours and hours. If they believe they're playing with a person, and that "person" slowly builds up towards an opinion that would change the player's core belief, and this goes on for a very, very long time, then that is extremely effective. Because even if the player realizes this fact later on, they might still keep the core belief they were programmed into believing.

    What good is democracy if you can just reprogram people to vote the way you want? It's essentially the death of democracy as we see it today if this happens.
  • Alkis Piskas
    2.1k


    I watched about 20 min of the video. So I cannot speak for the whole of it. But until that point I could not see anything that refers to an inherent danger of AI itself.

    What I will say might sound like an oversimplification of the subject, compared especially with the overwhelming technical information provided in the video. And besides, who am I to talk in front of experts in AI technology like Tristan Harris and Aza Raskin? Yet this doesn't prevent me from expressing my opinion on the subject, especially if I present another view of the problem (the "AI dilemma").

    We live in the era of information. Everything depends on information. And information is mainly digital. (Without of course ignoring written and verbal information from analogue sources.) And digitality refers to computers and computer technology.

    The ethical implications of AI technology are about the same as those of computer technology.
    Hacking, for instance, has produced huge damage around the world and is always a threat to companies, nations and humanity. Wars today --like that in Ukraine-- are based on digital technology, as are financial crises and massive problems in various sectors of society. But can we blame computer technology for that? Or can we say that computer technology is dangerous? Of course not. Such a thing would be absurd.

    Science itself cannot have and has no ethical implications. Its mission is to describe and discover things, solve problems and produce results. But its use, i.e. technology, can have such implications. Yet even that depends on the way it is used.

    AI can be used in a dangerous way, or even on purpose to harm. Also, a bad-quality product created with AI technology can render it dangerous.

    I personally have not heard of any actually dangerous AI product. And if one is proved to be such, I suppose its production and use would be forbidden by law. And regarding the Internet specifically, there's something called "content blocking", which some countries use to control online access for a variety of reasons: "to shield children from obscene content, to prevent access to copyright-infringing material or confusingly named domains, or to protect national security." (https://www.eff.org/el/issues/content-blocking)

    Now, coming closer to the "AI dilemma", I will take the example of chatbots, especially ChatGPT and Bard, which are discussed a lot these days. I believe that here too we cannot blame the technology, as long as we understand its limitations and reliability. The chatbots do hide a danger if these two factors are not taken into consideration: the transmission and spread of misinformation. So what is needed here, as in a lot of similar cases and cases of security dangers, is proper education. This is a very crucial part of our information world which, unfortunately, we are not taking seriously enough and even tend to ignore.

    So, in my opinion, the dilemma is not about AI. It's about our will and ability 1) to educate people appropriately and 2) to control its use, i.e. use it in a really productive, reliable, and responsible way.
  • universeness
    6.3k
    But I just see AI producing more extreme versions of the problems we already have. The major one being distrust of truths, facts, and experts.Christoffer

    Yep, I share that 'immediate' concern.

    Facial recognition requires a 3D depth scan and other factors in order to work, so I'm not sure it would change that factChristoffer
    So how about an AI attached to a 3D printer, producing a 3D mask the perp could paint and wear? :scream:

    And today we don't really have any laws for AI in similar situations. Imagine getting bots to play "friends" with people on Twitter and in Facebook groups. An AI analyzes a debate and automatically creates arguments to sway the consensus. Or to be nice to people, and slowly turn their opinions towards a certain candidate.Christoffer

    Yep, another concern I agree with.

    What good is democracy if you can just reprogram people to vote the way you want? It's essentially the death of democracy as we see it today if this happens.Christoffer

    Would people be so easily fooled, however, if they knew this was happening? Surely we would come up with a counter-measure once we know it's happening. Could a 'counter' AI intervene and point out to the viewer that they are being duped? But then how do we know which AI is the 'good guy'?
    Surely those 'good guys' in power must see the dangers Tristan and Aza are pointing to!
  • universeness
    6.3k
    I watched about 20 min of the video. So I cannot speak for the whole of it. But until that point I could not see anything that refers to an inherent danger of AI itself.Alkis Piskas

    Oh, you so need to watch the rest, Alkis!

    Tell me what you thought of the example of AI learning the language produced via fMRI (functional magnetic resonance imaging) scans. The two examples were produced by the AI from ONLY analysing what it had learned from the data available from all the fMRI scans performed on humans so far, and developing that into a language!
    If AI can learn to understand what our brain is 'thinking' then wow.......... wtf?
    AI can't currently scan our brain from a distance, but future AI may be able to create such a tech quite easily based on current fMRI machines.
    Maybe in the future we will all need to wrap our heads in tin foil!!!! :lol:

    [image: tin foil hats]
  • Isaac
    10.3k
    Make-up, however, is part of the whole experience: they see the same "made-up" face in the mirror every day, and there's the physical act of applying it and being part of the transformation. But a filter that isn't even known to the user, i.e. something working underneath the experience of taking photos that is officially just "part of the AI processing of the images to make them look better", can have a psychological effect on the user, since it's not by choice.Christoffer

    Really? This is a hidden feature not openly declared?
  • Christoffer
    2.1k
    Would people be so easily fooled, however, if they knew this was happening? Surely we would come up with a counter-measure once we know it's happening.universeness

    It already happened in and around 2016, before we started putting pressure on data collection through social networks, but regulation is still not rigid enough to counter all use of this function. Data is already collected for marketing purposes, so the rules on what levels of data may be used (official profile information, subscribed outlets etc., or deeper: location of posts with context, private messages etc.) are what define the morality of this praxis. Different nations also have different levels of laws on this. Europe has much better protective laws than the US, for example, which led to the GDPR.

    So, I would say it's safer to be in the EU when AI systems hit, since the EU more often than not is early in installing safety laws compared to the rest of the world. But since all of this differs around the globe, some nations will have AI systems with no restrictions at all, and that can spread, if not watched carefully, much more easily than today's human-controlled algorithms.

    If AI can learn to understand what our brain is 'thinking' then wow.......... wtf?universeness

    Imagine an actual lie detector that is accurate. It would change the entire practice of law. If you could basically just tap into the memory of a suspect and crosscheck that with witnesses, then you could, in theory, skip an entire trial and just check if the suspect actually did it or not.

    But since AI systems, at least at the moment, operate with bias, it might not be accurate with just one scan of the suspect, since the brain can actually remember wrongly. That's why witnesses would need to be scanned as well, and all of it crosschecked against the actual evidence from the crime scene. But if the suspect's memories show an act that correlates with the evidence at the crime scene, meaning all the acts you see in the scan had the same consequences as what can be read from that crime scene, then I would argue that this is more accurate than any method we have right now, except maybe DNA. In cases where there's very little evidence to go by, it could be a revolution in forensics, just as DNA was.

    This concept can be seen in the episode "Crocodile" of the Black Mirror series.

    Really? This is a hidden feature not openly declared?Isaac

    It is open in that all manufacturers adhere to the concept of improving settings according to portrait photography. But the question here is: what does that mean? A portrait photographer may go through sculpting light, make-up, different lenses, sensors, and color science within the hardware and sensor of the camera. But it can also mean post-processing: retouching in Photoshop, the manipulation of the model's facial features like skin quality and facial hair, and even changes to facial bone structure, in order to fit a contemporary trend in portrait photography.

    So, since the "standards" of what these companies view as "portrait photography" aren't clearly defined, we actually don't know what changes are being made to the photos we take. These settings are turned on by default in mobile cameras, since they are the foundation for all marketing of these systems. When you see a commercial about the iPhone's brand new camera and how good it is at taking photos that "rival DSLR cameras", you are witnessing images that were highly processed by the onboard AI or neural chip to fit the "standard" that Apple defined for photography. If these standards start to include beautifying faces based on someone's definition of what a "beautiful face" is, then the standard used could incorporate facial reconstruction: small changes that might not be noticeable at first glance but that, unknown to the user, change the user's appearance as the camera system's default setting.

    On top of this, if a system then uses AI to figure out what is "beautiful" based on big data, we will further push people's appearance in photos toward a "standardized beauty" because of that data bias. This could produce the same effect as marketing's normal beauty standards giving people mental health issues by pushing them to pursue that standard, but in a more extreme way: the standard becomes a mirror laughing back at you every time you take a photo of yourself and compare it to what you see in the actual mirror.

    So, an openly declared feature of AI-assisted cameras on mobile phones is not the same thing as openly defining what standards of "portrait photography" are being used.


    It's happening right now; the question is what more advanced AI will do to these functions, and what the unintended consequences would be.
  • universeness
    6.3k
    So, I would say it's safer to be in the EU when AI systems hit, since the EU more often than not is early in installing safety laws compared to the rest of the world. But since all of this differs around the globe, some nations will have AI systems with no restrictions at all, and that can spread, if not watched carefully, much more easily than today's human-controlled algorithms.Christoffer

    It seemed to me that what Tristan and Aza were warning about has little or no current legislation that would protect us from its deployment by nefarious characters interested only in profiteering.
    They also seemed to suggest that any subsequent legislation would be too little, too late.
    I do think we are potentially handing a whole new set of powerful weaponry to the nefarious humans amongst us without first establishing strong defences.

    Surely they should be running well-considered simulations of the consequences of this or that AI ability being released into the public sphere. In my own teaching of Computing Science, we even taught secondary school pupils the importance of initial test methodologies, such as the DMZ (De-Militarised Zone) method of testing software to see what effects it would have before it was ever involved in any kind of live trial.

    Imagine an actual lie detector that is accurate. It would change the entire practice of law. If you could basically just tap into the memory of a suspect and crosscheck that with witnesses, then you could, in theory, skip an entire trial and just check if the suspect actually did it or not.Christoffer

    But surely, if AI becomes capable of such an ability, it would not be introduced before protection is established against such possible results as the 'thought police' (Orwell's 1984) or the pre-crime dystopian idea dramatised in the film 'Minority Report', etc.

    In one sense, it's great if AI can help catch criminals, and Tristan and Aza did talk about some of the advantages that this golem class of current AI will bring, BUT if it also brings the kind of potential for very powerful new ways to scam people etc., then the ones who release it will have hell to pay, if the people who suffer track the cause back to them.
  • Isaac
    10.3k


    Thanks for the detail. Yes, I see the issue if changes are being made without the knowledge of the person whose image it is. I'm not sure I share your level of concern though (I'm more inclined to think people will just come to terms with it), but I see how one might be more concerned.
  • Benkei
    7.8k
    This ship has sailed and government will be too slow to act. Delete social media, ignore marketing and read a book to manage your own sanity.
  • Christoffer
    2.1k
    It seemed to me that what Tristan and Aza were warning about has little or no current legislation that would protect us from its deployment by nefarious characters interested only in profiteering.universeness

    It's with topics like this that the EU is actually one of the best political actors globally. Almost all societal problems that arise out of new technology have been quickly monitored and legislated by the EU to prevent harm. Even so, you are correct that it's still too slow in regards to AI.

    In my own teaching of Computing Science, we even taught secondary school pupils the importance of initial test methodologies, such as the DMZ (De-Militarised Zone) method of testing software to see what effects it would have before it was ever involved in any kind of live trial.universeness

    Nice to speak with someone who's teaching on this topic. Yes, this is what I mean by the importance for philosophers and developers of extrapolating dangers out of a positive. The positive traits of new technology are easily drawn out on a whiteboard, but figuring out the potential risks and dangers can be abstract, biased, and utterly wrong if not done with careful consideration of a wide range of scientific and political areas. There has to be a cocktail effect incorporating psychology, sociology, political philosophy, moral philosophy, economics, military technology, and technological evaluation of a system's possible functions.

    These things are hard to extrapolate. It almost requires a fiction writer to make up potential scenarios, though based on the actual facts within the areas listed. A true intuition that leaves most bias behind and honestly looks at the consequences. This is what I meant by the debate often polarizing the different sides into stereotypical extremes of either super-positive or super-negative, for and against AI, but never accepting AI as a reality while still working on mitigating the negatives.

    That's the place where society needs to be right now, publicly, academically, politically, and morally. The public needs to understand AI much faster than they are right now, for their own sake in terms of work and safety, as well as to protect their own nation against a breakdown of democracy and societal functions.

    The biggest problem right now is that too many people regard AI development as something "techy" for "tech people" and those interested in such things. That will lead to a knowledge gap, and to societal collapse if AI takes off so dramatically that it fundamentally changes how society works.

    But surely, if AI becomes capable of such an ability, it would not be introduced before protection is established against such possible results as the 'thought police' (Orwell's 1984) or the pre-crime dystopian idea dramatised in the film 'Minority Report', etc.universeness

    Some nations will, since not all nations have the same idea about law and human rights. It might be an actual reality in the future. The UN would probably ban such tech and these nations will be pariah states, but I don't think we could change the fact that it could happen and probably will happen somewhere.

    In one sense, it's great if AI can help catch criminals, and Tristan and Aza did talk about some of the advantages that this golem class of current AI will bringuniverseness

    If the tech is used for forensic purposes, I think it would pressure a lot of potential criminals not to commit crimes. Imagine a crime that someone knows they can commit without anyone ever knowing they did it. With this tech, that doesn't matter; they would be caught by just scanning all the people who were close to the crime. Why commit a crime if the probability of being caught is so high that it's almost a guarantee? However, crimes will still happen, since crime has an internal logic that doesn't usually care about the possibility of getting caught. Most crimes committed are so obvious that we wonder how the criminal could ever have been as stupid as they were. But the crimes that go unseen, especially high up in society where people always get away through pure power, loyalty, corruption etc. That's something that might improve. Like, just put the scan on Trump and you have years of material to charge him with.

    I'm not sure I share your level of concern though (I'm more inclined to think people will just come to terms with it), but I see how one might be more concerned.Isaac

    It was just one example of an extrapolation, so I'm not sure I'm as concerned either, but it's important to "add it to the list" of possible actual consequences. Just as they describe in the video, the consequences that arose out of the last 15 years of internet development were unforeseen at the time, but have ended up being much more severe than anyone could have imagined... because nobody really cared to imagine them in the first place.

    This ship has sailed and government will be too slow to act. Delete social media, ignore marketing and read a book to manage your own sanity.Benkei

    That's the solution to the previous problem with the rise of the internet and social media, but the current development of AI might creep into people's lives even if they did shut down their social media accounts and read a book instead.

    I don't think anyone should ignore this development; ignoring such developments is what created all the previous problems in the first place.

    [image: "this is not fine" meme]
  • universeness
    6.3k
    It's with topics like this that the EU is actually one of the best political actors globally. Almost all societal problems that arise out of new technology have been quickly monitored and legislated by the EU to prevent harm. Even so, you are correct that it's still too slow in regards to AI.Christoffer

    How do you know this? Are you familiar with the details involved via your career, past or current?

    Nice to speak with someone who's teaching on this topic.Christoffer
    Well, I took early retirement from teaching Computing Science 4 years ago.

    The positive traits of new technology are easily drawn out on a whiteboard, but figuring out the potential risks and dangers can be abstract, biased, and utterly wrong if not done with careful consideration of a wide range of scientific and political areas. There has to be a cocktail effect incorporating psychology, sociology, political philosophy, moral philosophy, economics, military technology, and technological evaluation of a system's possible functions.Christoffer
    I agree that is, broadly, what is required, but Tristan and Aza seem to be suggesting that such precaution is just not happening, and with all due respect to @Benkei et al., some folks have already given up the fight!

    These things are hard to extrapolate. It almost requires a fiction writer to make up potential scenarios, though based on the actual facts within the areas listed.Christoffer

    I don't think that's true. I agree that fully exhaustive testing is not possible or practical, but human experts are very good at testing systems rigorously when time, money and profiteering are not the main drivers.

    This is what I meant by the debate often polarizing the different sides into stereotypical extremes of either super-positive or super-negative, for and against AI, but never accepting AI as a reality while still working on mitigating the negatives.Christoffer

    I agree; such concern is probably why Mr Harris and Mr Raskin made the vid they made. They did highlight in the video the example of how humans eventually gained some control over the development of nuclear weapons. M.A.D. was the main motivator in that example, imo, and the Russian invasion of Ukraine, the Chinese interest in Taiwan, the mad leadership of North Korea, etc., show we are still not 'beyond' the threat of a global nuclear war. It remains one of my hopes that a future AGI might even save us from such threats.

    That's the place where society needs to be right now, publicly, academically, politically, and morally. The public needs to understand AI much faster than they are right now, for their own sake in terms of work and safety, as well as to protect their own nation against a breakdown of democracy and societal functions.Christoffer
    I agree, especially if most of our politicians are not fully aware of the clear and present dangers described in the video. Perhaps it's time for us all to write to or email the national politician who represents the region we each live in, and ask them to watch the video! We all have to try to be part of the solutions.

    The UN would probably ban such tech and these nations will be pariah states, but I don't think we could change the fact that it could happen and probably will happen somewhere.Christoffer

    Yeah, 'sod's law' has always proved repeatedly demonstrable, historically speaking!

    That's something that might improve. Like, just put the scan on Trump and you have years of material to charge him with.Christoffer
    An AI scan of Trump's thoughts may become a moment of important scientific discovery, as I think the result would be the first AI that digitally throws up!
    I am imagining, right now, the user interface animation I think the AI would automatically produce.
  • Benkei
    7.8k
    I agree that is, broadly, what is required, but Tristan and Aza seem to be suggesting that such precaution is just not happening, and with all due respect to Benkei et al., some folks have already given up the fight!universeness

    Because it's not a matter of regulation, which is never universal, but of ethics and culture. Law is more about economics than anything else. Since this is a money-maker, laws will aim at maximising profit first, to the detriment of protection for people. The EU is no different.
  • universeness
    6.3k

    Law is one method of helping to control human behaviour.
    Reasoned argument about the common good is another. There are many more methods available.
    The struggle between those who are part of the solutions and those who are part of the problem will continue.
    The threats from this golem class of AI, about which the video in the OP sounds a clarion call, CAN BE contained. Perhaps in a similar way to how the human race has been able to prevent its own extinction via nuclear weapons....at least so far.
    AI offers great benefits, but as has always been the case with new tech, there are many dangers involved as well. The human race is NOT utterly incapable of containing the threats presented by this golem class of AIs. I think that's the most important conviction to have at this point.

    Since this is a money-maker, laws will aim at maximising profit first, to the detriment of protection for people.Benkei
    So do you agree that this is an outcome that we must all refuse to accept?
  • Bylaw
    559
    So, in my opinion, the dilemma is not about AI. It's about our will and ability 1) to educate people appropriately and 2) to control its use, i.e. use it in a really productive, reliable, and responsible way.Alkis Piskas
    The problem with AI (and also with genetically modified organisms and nanotech) is its potential not to be local at all when we mess up. With all our previous technologies, we have been able to use them (Hiroshima, Nagasaki and the tests) or make mistakes (anything from Rocky Flats to Fukushima to Chernobyl) and have these be local, if enormous, effects. If we make a serious boo boo with the newer technologies, we stand a chance of the effects going everywhere on the planet and potentially affecting every single human (and members of other species, in fact perhaps through them us). And we have always made boo boos with technologies. So, yes, control its use, make sure people are educated. But then, we've done that with other technologies and made boo boos. Right now much of the government oversight of industry (in the US, for example) is owned by industry. There is a revolving door between industry and oversight. There is financing of the oversight by industry (with the FDA, for example). The industries have incredible control over media and government, by paying for the former and by lobbying and campaign finance for the latter.

    I see little to indicate we are ready for serious mistakes with these new technologies: ready to prevent them, or mature enough at the corporate or government level to really weigh the risks at the levels necessary.
  • Alkis Piskas
    2.1k
    The problem with AI (and also with genetically modified organisms and nanotech) is its potential not to be local at all when we mess up. With all our previous technologies, we have been able to use them (Hiroshima, Nagasaki and the tests) or make mistakes (anything from Rocky Flats to Fukushima to Chernobyl) and have these be local, if enormous, effectsBylaw
    I thought that you would mention that. But the atomic bombing of Nagasaki was like an experiment. A bad one, of course. But we saw its horrible effects and haven't tried again. Yet during the whole Cold War period, I remember, we were saying that it would only take a crazy, insane person to "press the button". It would need much more than that, of course, but still the danger was visible. And it still is today, especially as more countries with atomic weapons have entered the scene since then.

    If we make a serious boo boo with the newer technologies, we stand a chance of the effects going everywhere on the planet and potentially affecting every single humanBylaw
    There are a lot of different kinds of "boo boos" we can make that are existential threats, and which are much more visible and realistic than AI's potential dangers.
    Indeed, a lot of people are talking or asking about potential dangers in AI technology. Yet I have never heard a realistic, practical example of such a danger. Most probably because such a thing would require resorting to sci-fi novels and movies, where robots take over the planet and all that crap.

    There are a lot of major existential threats to humanity based on technology: nuclear war (nuclear technology), climate change (various technologies), engineered pandemics (biotechnology). Recently we have started to talk about one more technology that can threaten humanity: that of Artificial Intelligence. The main danger is supposed to be a development of AI that surpasses human abilities and that humans would not be able to control. However, I personally can't think of any particular example that would create such a danger.

    Dangers created by humans can always be controlled and prevented. It's all a question of will, responsibility and choice.

    The only thing that humans cannot control is natural catastrophes.

    Right now much of the government oversight of industry (in the US, for example) is owned by industry. ... The industries have incredible control over media and government, by paying for the former and by lobbying and campaign finance for the latter.Bylaw
    Right.
  • Bylaw
    559
    I thought that you would mention that. But the atomic bombing of Nagasaki was like an experiment. A bad one, of course. But we saw its horrible effects and haven't tried again. Yet during the whole Cold War period, I remember, we were saying that it would only take a crazy, insane person to "press the button". It would need much more than that, of course, but still the danger was visible. And it still is today, especially as more countries with atomic weapons have entered the scene since then.Alkis Piskas
    Yes, so far we haven't gone to a full exchange of nukes or any tactical use of nukes. But these wouldn't be mistakes. They would be conscious choices. My point in bringing in nuke use was that even nukes, as long as they are single instances, or single leaks or catastrophes, are still local, not global. Chernobyl would have been partly global if the worst-case scenario hadn't been offset by some extremely brave techs and scientists who were ingenious and paid with their lives. But in general, things like Fukushima are still local. New tech like AI is potentially global.
    There are a lot of different kinds of "boo boos" we can make that are existential threats, and which are much more visible and realistic than AI's potential dangers. Indeed, a lot of people are talking or asking about potential dangers in AI technology. Yet I have never heard a realistic, practical example of such a danger.Alkis Piskas
    Stephen Hawking and Elon Musk et al. did not rely on sci-fi.
    https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
    There are plenty of people who work within the field of AI or have done, or who are scientists in related fields, or professionals, who have well-grounded concerns not based on science fiction.
    Dangers created by humans can always be controlled and prevented. It's all a question of will, responsibility and choice.Alkis Piskas
    But it is precisely humans that need to control that which they do not control. My point is that we have not controlled technology previously and have had serious accidents consistently. Fortunately the devastation has been local. With our new technologies, mistakes can be global. There may be no learning curve possible.
  • Alkis Piskas
    2.1k

    Thank you very much for this feedback. Very useful, especially the Wikipedia link. :up:

    My point in bringing in nuke use was that even nukes, as long as they are single instances, or single leaks or catastrophes, are still local, not global.Bylaw
    Yes, I know what you said and meant. But we cannot know how "non-local" these incidents can be, i.e. how "global" they can go.
    But, as I said, there are other technological fields than the nuclear one that can easily go global. For instance, a virus like Covid-19 (assuming that it was a byproduct of biotechnology) could be fatal. Its spread could get out of control. Yet we don't often hear talk about the dangers of biotechnology, which certainly exist and are more important and crucial than those of AI technology. Below I comment on the superhype regarding "AI risks".

    Stephen Hawking and Elon Musk et al. did not rely on sci-fi.Bylaw
    Of course not. He was a scientist and he is a technology expert, respectively. The opposite happens: sci-fi people take their ideas from science and technology and inflate them, misrepresent them, and make them sound menacing or fascinating, for profit.

    I read the article. Thank you. Quite interesting. In fact, for a moment I thought that there might indeed be real AI existential risks --I mean, inherent to AI technology and long-term-- that I couldn't think of. But I didn't get such a picture. Below are my comments on the article:

    The article says that there is also another side of the issue, citing Professor Oren Etzioni, who believes the AI field is being "impugned" by a one-sided media focus on the alleged risks. The letter contends that:
    The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

    (Highlighting is my own.) Warning about potential pitfalls is of course a must, but I can't see what these could be. That is, it's just a general warning. We usually see much more specific and strongly stressed warnings in other technological fields.

    Also:
    "Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist."
    (Highlighting is my own.) Right. This puts the problem in the right perspective. Because what I hear here and elsewhere about AI risks has an alarmist color.

    In the section "Concerns raised by the letter", we read:
    "Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification."
    (Highlighting is my own.) As we see, computer science is also involved here, which I also talked about in my comments. AI and computer science are two things that go together. If we talk about inherent or potential dangers of AI, we must also talk about inherent or potential dangers of computer science, something which has never come to my attention.

    Well, once more, concrete examples of AI's long-term dangers are missing. About short-term concerns, they give the example of a self-driving car, which is one of the first things I thought about regarding AI technology dangers. I also thought about automatic pilots in airplanes. (Both are called "autopilots".) And I also mentioned that dangers may come from bad, defective AI technology or applications, which is not to be taken as AI's inherent or potential dangers, which is what this topic and video talk about, i.e. warnings about and dangers of AI.
    No example, though, is offered for long-term dangers. And this is why I said that these exist only in the sci-fi sphere.

    My point is that we have not controlled technology previously and have had serious accidents consistently.Bylaw
    Yes, I know.

    Thanks for your feedback! :up: