Comments

  • The "AI is theft" debate - An argument
    You ask "Why is B theft?" but your scenario omits any legal criteria for defining theft, such as whether B satisfies a set threshold of originality.

    How could we know whether B is theft when you don't show or describe its output, only its way of information processing. Then, by cherry picking similarities and differences between human and artificial ways of information processing, you push us to conclude that B is not theft. :roll:
    jkop

    Because the whole argument I made is about the fact that the claims of copyright infringement target the process of training these models, not the output. When people scream "theft!" at the tech companies, they are screaming at the process of training the models on copyrighted material. What I demonstrated with the scenarios is that this process does not fall under copyright infringement, because it's an internal process that happens behind closed doors, either inside our head or in the AI lab. That process cannot be blamed for copyright infringement, and so the companies cannot be blamed for violating any copyright except through the output.

    Because of this, the output is a question of alignment, and the companies are actively working to mitigate accidental plagiarism, which means they're already working to address the problems that artists don't like about AI generations. The user is then solely responsible for how they use the generated images and is solely the one who needs to make sure they don't end up with plagiarized content.

    But the main issue, and why I'm making this argument, is that none of this matters for the artists filing lawsuits. They are attacking the first part, the process, the one that scenarios A and B are about, and have therefore shown themselves to be uninterested in alignment or in making sure these models are safe from plagiarism. Instead, they either have no knowledge of the technology and make shit up about how it is theft, things that aren't true about how the technology works, because they think the companies just take their stuff and put it out there. Or they know how the technology works, but they intentionally target the part of the technology that would kill the models, as an attempt to destroy the machines like the Luddites did. Both of these stances are problematic and could lead to court rulings at a loss for artists, effectively giving artists less voice in this matter rather than empowering them.

    The argument is about focusing on alignment and how to improve the outputs past plagiarism. It's about making sure LLMs always cite something if they use direct quotes, and have guardrails that self-analyze the outputs to make sure they fall within our rather arbitrary definitions of originality.
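    To make the guardrail idea concrete, here is a minimal sketch of one way such a check could work: scan generated text for word-for-word spans that also appear in a known source, so a citation could be demanded. Everything here is an illustrative assumption (the function name, the 6-word window, the toy texts); real systems would use far more sophisticated matching.

```python
# Hypothetical sketch of a "direct quote" guardrail: flag every n-word
# window of generated text that appears verbatim in a source text.
# Window size and example strings are arbitrary choices for illustration.

def verbatim_spans(generated: str, source: str, n: int = 6) -> list[str]:
    """Return each n-word window of `generated` that occurs word-for-word
    in `source` (a crude stand-in for a real overlap detector)."""
    src_words = source.lower().split()
    src_ngrams = {tuple(src_words[i:i + n]) for i in range(len(src_words) - n + 1)}
    gen_words = generated.lower().split()
    hits = []
    for i in range(len(gen_words) - n + 1):
        window = tuple(gen_words[i:i + n])
        if window in src_ngrams:
            hits.append(" ".join(window))
    return hits

source = "it was the best of times it was the worst of times"
clean = "the model wrote something entirely new about good and bad eras"
copied = "indeed it was the best of times it was for everyone"

print(verbatim_spans(copied, source))  # three overlapping 6-word windows
print(verbatim_spans(clean, source))   # []
```

    A real guardrail would run a check like this against the whole training corpus and either block the output or attach a citation, which is the behavior argued for above.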

    Because people stare blindly into the darkness around these AI models. The positive sides of them, all the areas in which we actually benefit, like medicine, the sciences, and even certain tools for artists, partly rely on the same models that are trained on this copyrighted material, because the amount of data is key to the accuracy and abilities of the models. So when artists want to block the use of copyrighted material in training data, they're killing far more than they might realize. If a cancer drug development project is utilizing GPT-4 and suddenly must shut it down and retrain on less data, that will stall the development of the drug, and the project may not be able to continue at all if a reworked model doesn't function the same way due to the removal of a large portion of the training data.

    People simply don't understand this technology and run around screaming "theft!" just because others scream "theft!". There's no further understanding and no further nuance to this topic, and this simplified, shallow version of the debate needs to stop, for everyone's sake. These models are neither bad nor good; they're tools, and as such it's the usage of the tools that needs to be addressed, rather than letting Luddites destroy them.
  • The "AI is theft" debate - An argument
    One difference between A and B is this:

    You give them the same analysis regarding memorizing and synthesizing of content, but you give them different analyses regarding intent and accountability. Conversely, you ignore their differences in the former, but not in the latter.
    jkop

    No, neither the AI system nor the brain's physical process carries accountability, because you can only put guilt on something that has subjective intent. And I've described how intent is incorporated into each. The human has both the processing function AND the intent built into the brain. The AI system only has the processing function, and the intent lies with the user of that system. But if we still put responsibility on the process itself, then it's a problem of alignment, and we can fine-tune the AI system to align better. Even better than we can align a human, since human emotion gets in the way of aligning our intent, which is why accidental plagiarism happens all the time. We simply aren't smart enough in comparison to an AI model that's been properly aligned with copyright law. Such a system will effectively be better than a human at producing non-infringing material within a decided threshold of "originality".
  • The "AI is theft" debate - An argument
    The processors in AI facilities lack intention, but AI facilities are owned and operated by human individuals and corporations who have extensive intentions.BC

    And those extensive intentions are what, in your perspective? And in what context of copyright do those intentions exist?

    AGI doesn't necessarily have to think exactly like us, but human intelligence is the only known example of a GI that we have and with regards to copyright laws it's important that the distinction between an AGI and a human intelligence not be that all that wide because our laws were made with humans in mind.Mr Bee

    I'm not exactly sure what point you're making here. The only time copyright law would apply to the system itself, independent of humans on either the back or front end, is when an AGI shows real intelligence and provable qualia, but that's a whole other topic in AI that won't apply until we're actually at that point in history. That could be a few years from now, fifty years, or maybe never, depending on things we've yet to learn about AGI and superintelligence. For now, the AGI systems that are on the table mostly just combine many different tasks, so that if you input a prompt it will plan, train itself, and focus its efforts towards the goal you asked for, without constant monitoring and iterative inputs from a human.

    Some believe this would lead to actual subjective intelligence for the AI, but it's still so mechanical, and so lacking in the emotional component that's key to how humans structure their experience, that the possibility of qualia is pretty low or non-existent. So the human input, the "prompter", still carries the responsibility for its use. I think, however, that the alignment problem becomes a bigger issue with AGI, as we can't predict in what ways an AGI will plan and execute for a specific goal.

    This is also why AGI can be dangerous, as in the paperclip scenario. With enough resources at its disposal it can spiral out of control. I think the first example of this will be the collapse of some website infrastructure, like Facebook's, as an AGI floods the servers with operations due to a task that spirals out of control. So before we see nuclear war or any actual dangers, we will probably see some sort of spammed nonsense because an AGI executed a hallucinated plan for some simple goal it was prompted to do.

    But all of that is another topic really.

    The question is whether or not that process is acceptable or if it should be considered "theft" under the law. We've decided as a society that someone looking at a bunch of art and using it as inspiration for creating their own works is an acceptable form of creation. The arguments that I've heard from the pro-AI side usually tries to equate the former with the latter as if they're essentially the same. That much isn't clear though. My impression is that at the very least they're quite different and should be treated differently. That doesn't mean that the former is necessarily illegal though, just that it should be treated to a different standard whatever that may be.Mr Bee

    The difference between these systems and the human brain has more to do with the systems not being the totality of how a brain works. They simulate a very specific mechanical aspect of our mind, but as I've mentioned they lack intention and internal will, which is why inputted prompts need to guide these processes towards a desired goal. If you were able to add different "brain" functions up to the point that the system operates on identical terms to the totality of our brain, how would laws for humans start to apply to the system? When would we decide it has enough agency to be the one responsible for its actions?

    But the fundamental core of all of this is whether copyright law applies to a machine that merely simulates a human brain function. It may be that neural networks which constantly reshape and retrain themselves on input data are all there is to human consciousness; we won't know until these models reach that point. But in the end it becomes a question of how copyright law functions within a simulation of how we humans "record" everything around us in memory and how we operate on it.

    Because when we compare these systems to artists and how they create something, there are a number of actions by artists that seem far more infringing on copyright than what these systems do. If a diffusion model is trained on millions of real and imaginary images of bridges, it will generate a bridge that is merely a synthesis of them all. And since there's only a limited number of image perspectives of bridges that are three-dimensionally possible, the result will weigh more towards one set of images than others, but never towards a single photo. An artist, however, might take a single copyrighted image and trace-draw on top of it, essentially copying the exact composition and choice of perspective from the one who took the photograph.

    So if we're just going by the definition of a "copy", or the claim that the system "copies" from the training data, it rather looks like there is more actual copying done by artists than there is within these diffusion models.

    Copyright court cases have always been about judging "how much" was copied. It's generally about determining how many notes two pieces have in common, or whether too many exact words or sentences from a lyric or text appear in sequence. And they all depend on the ability of the lawyers and attorneys to prove that the actions taken fell on one side or the other of a line drawn in the sand by previous cases that proved or disproved infringement.

    Copyright law has always been shifting because it's trying to apply a definition of originality to determine whether a piece of art is infringement or not. But the more we learn about the brain and the creative process of the mind, the more we understand how little free will we actually have, how influential our chemical and environmental processes are in creativity, and how illogical it is to propose "true originality". It simply doesn't exist. But copyright law demands that we have a certain line drawn in the sand that defines where we conclude something is "original"; otherwise art and creativity cannot exist within a free-market society.

    Anyone who has studied human creativity in a scientific manner, looking at biological processes, neuroscience, etc., will start to see how these definitions soon become artificial and non-scientific. They are essentially arbitrary inventions that, over the centuries and decades since 1709, have gone through patchworks of revision trying to make sure that line in the sand is in the correct place.

    But they're also taken advantage of: by powerful artists who have used them against lesser-known artists, and by institutions that have used them as a weapon to acquire works from artists who lose their compensation because they didn't have a dozen legal teams behind them fighting for their rights.

    So, what exactly has "society" decided about copyright law? In my view it seems to be a rather messy power battle rather than a genuine attempt to find where the line should be drawn in the sand. The reason well-known artists try to prove copyright infringement within the process of training these models is that if they win, they will kill the models, since the models can't function without the data necessary to train them. The idea of an existential threat to artists has skewed people's minds into making every attempt to kill these models, regardless of how illogical the reasoning behind it is. But it's all based on some magical thinking about creativity, and it ignores the social and intellectual relationship between the artist and the audience.

    So, first, creativity isn't a magic box that produces originality; there's no spiritual or divine source for it, and that creates a problem for the people drawing the line in the sand. Where do you draw it? When do you decide something is original? Second, artists will never disappear because of these AI models, because art is about the communication between the artist and their audience. The audience wants THAT artist's perspective and subjective involvement in creation. If someone, whether an artist or a hack who believes they're an artist, thinks that generating a duplicate of a certain painting style through an AI system is going to kill the original artist, they're delusional. The audience doesn't care to experience derivative work; they care about what the actual artist will do next, because the social and intellectual interplay between the artist and the audience is just as important as, if not more important than, some derivative content that looks similar. The belief that artists are going to lose money to some hacks forcing an AI to make "copies" and derivative work out of their style is delusional on both sides of the debate.

    In the end, it might be that we actually need the AI models for the purpose of deciding copyright infringement:

    Imagine if we actually trained these AIs on absolutely everything that's ever been created and is possible to use as training data. Then we align that system to be used as a filter, where we set the weights of the system to approximately draw that line in the sand, based on what we "feel" is right for copyright law. Then, every time we have a copyright dispute in the world, be it an AI generation or someone's actual work of art, the artwork is put through that filter, and it can spot whether that piece of work falls under copyright infringement or not.
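    The filter idea above can be sketched as a scoring function with a tunable cutoff, which is where the "line in the sand" would literally live as an adjustable parameter. This is only a toy: bag-of-words cosine similarity stands in for whatever learned representation a real system would use, and the corpus, function names, and 0.8 threshold are all invented for illustration.

```python
# Toy "infringement filter": score a disputed work against a corpus and
# compare against a tunable threshold (the adjustable "line in the sand").
# Bag-of-words cosine similarity is a deliberately crude stand-in.

from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts as word-count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def infringes(disputed: str, corpus: list[str], threshold: float = 0.8) -> bool:
    """Flag the disputed work if it sits too close to any corpus work."""
    return max(cosine(disputed, work) for work in corpus) >= threshold

corpus = ["a lone red bridge over a misty river at dawn",
          "portrait of a woman in blue reading a letter"]

print(infringes("a lone red bridge over a misty river at dawn", corpus))  # True
print(infringes("three children playing chess in a park", corpus))        # False
```

    The key point the sketch makes is that the verdict is entirely a function of where the threshold is set, which mirrors the argument that "originality" is an arbitrary line we choose rather than discover.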

    That would solve both the problem of AI-generated outputs and that of ordinary copyright cases trying to figure out whether something was plagiarized.

    This is why I argue that artists should work with these companies for the purpose of alignment rather than battle against them. Because if we had a system that could help spot plagiarized content and define what's derivative, it would not only solve the problems with AI-generated content, it would also help artists who do not have enough legal power to win against powerful actors within the entertainment industry.

    But because the debate is simplified down to two polarized sides, and because people's view of copyright law rests on the belief that there is a permanent, rigid line in the sand, we end up in a power struggle about other things rather than about artists' actual rights, creativity, and the prospects of these AI models.

    Judging the training process to be copyright infringement becomes a stretch, and a very wiggly line drawn in that sand. Such a definition starts to creep into aspects that don't really have to do with copying and spreading files, or with plagiarism and derivative work. And it becomes problematic to define that line properly given how artists themselves work.

    Depends on what we're talking about when we say that this hypothetical person "takes parts of those files and makes a collage out of them". The issue isn't really the fact that we have memories that can store data about our experiences, but rather how we take that data and use it to create something new.Mr Bee

    Then you agree that the training process of AI models does not infringe on copyright, and that the focus should rather be on the problem of alignment, i.e. how these AI models generate something and how we can improve them so they don't end up producing accidental plagiarism. And as I mentioned above, such a filter in the system, or such an additional function to spot plagiarism, might even be helpful for determining whether plagiarism has occurred outside AI generations, making copyright cases more automatic and fair to all artists, not just the ones powerful enough to have legal teams acting as copyright special forces.

    Because a court looks at the work, that's where the content is manifest, not in the mechanics of an Ai-system nor in its similarities with a human mind.jkop

    If the court looks at the actual outputted work, then the training process does not infringe on copyright, and the problem is about alignment, not training data or the training process.

    Defining how the system works is absolutely important to all of this. If lots of artists use direct copies of others' work in their own work, and such work can pass copyright after a certain level of manipulation, then something that never uses direct copies should also pass. How a tool or technology functions is absolutely part of how we define copyright. Such rulings have been made for a long time, and not just in this case:

    https://en.wikipedia.org/wiki/White-Smith_Music_Publishing_Co._v._Apollo_Co.
    https://en.wikipedia.org/wiki/Williams_%26_Wilkins_Co._v._United_States
    https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Universal_City_Studios,_Inc.
    https://en.wikipedia.org/wiki/Bridgeman_Art_Library_v._Corel_Corp.

    What's relevant is whether a work satisfies a set threshold of originality, or whether it contains, in part or as a whole, other copyrighted works.jkop

    If we then look only at the output, there are cases like Mannion v. Coors Brewing Co., in which the derivative work can be argued to resemble the original even more closely than what a diffusion model produces even when asked for a direct copy, and yet the court ruled that it was not copyright infringement.

    So where do you draw the line? As soon as we start to define "originality" and begin to use scientific research on human creativity, we run into the problem of what constitutes "inspiration" or "sources" for the synthesis that is the creative output.

    There is no clear line for what constitutes "originality", so it's not a binary question. AI generation can be ruled both infringement and not, so it all comes down to alignment: how to make sure the system acts within copyright law, not whether it, in itself, breaks copyright law, which is what the anti-AI movement is trying to prove on these shaky grounds. Within the history of copyright cases, "originality" is a very muddily defined concept, to the point that anyone calling it "clear" doesn't know enough about this topic and has merely made up their own mind about what they themselves believe, which is no ground for any law or regulation.

    There are also alternatives or additions to copyright, such as copyleft, Creative Commons, Public Domain etc. Machines could be "trained" on such content instead of stolen content, but the Ai industry is greedy, and to snag people's copyrighted works, obfuscate their identity but exploit their quality will increase the market value of the systems. Plain theft!jkop

    And now you're just falling back on screaming "theft!" You simply don't care about the argument I've made over and over now. Training data is not theft, because it's not a copy, and the process mimics how the human brain memorizes and synthesizes information. It's not theft for a person with photographic memory, so why is it theft for these companies when they're not distributing the raw data anywhere?

    Once again you don't seem to understand how the systems work. It's not about greed; the systems require such a large amount of data in order to function properly. The amount of data is key. And HOW the technology works is absolutely part of how we define copyright law, as described with the cases above. So ignoring how this tech works and just screaming that they are "greeeeedy!" becomes the same polarized hashtag mantra that everyone else is shouting right now.

    And this attitude and lack of knowledge about the technology show up in your contradictions:

    Because a court looks at the work, that's where the content is manifest, not in the mechanics of an Ai-system nor in its similarities with a human mind.jkop
    Machines could be "trained" on such content instead of stolen content, but the Ai industry is greedy... Plain theft!jkop

    ...If the court should look only at the output, then the training data and process are not the problem, yet you still scream that this process is theft, even though the court may only be able to look at what the output of these AI models is.

    The training process using copyrighted material happens behind closed doors, just like artists gathering copyrighted material in the process of producing their artwork. If the training process on copyrighted material is identical to an artist using copyrighted material while working, since both happen behind closed doors, then the only thing that matters is the final artwork and the output from the AI. If alignment is solved, there won't be a problem; but the use of copyrighted material in the training process is not theft, regardless of how you feel about it.

    Based on previous copyright cases, if the tech companies win against those claiming the training process is "theft", it won't be because the companies are greedy and have "corrupted" legal teams; it will be because of copyright law itself and how it has been ruled on in the past. It's delusional to think that all of this concludes in "clear" cases of "theft".
  • The "AI is theft" debate - An argument
    A and B are set up to acquire writing skills in similar ways. But this similarity is irrelevant for determining whether a literary output violates copyright law.jkop

    Why is it irrelevant? The system itself lacks the central human component that is the intention behind its use, while the human has that intention built in. You cannot say the system violates copyright law, as the system itself is able neither to hold copyright on its output nor to break copyright by its own will. This was established by the monkey selfie copyright dispute (https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute) and would surely apply to an AI model as well. That leaves the user as the sole agent responsible for the output, or rather, for the use of the output.

    Because I can paint a perfect copy of someone else's painting. It's rather the way I use it that defines how copyright is applied. In some cases I can show my work crediting the original painter; in some cases I can even sell it like that; in others I will not be able to show it at all but can still keep it privately or unofficially. The use of the output defines how copyright applies, and because of that we are far past any stage in which the AI model and its function are involved, if it has made anything even remotely close to infringing at all.

    It's basically just a tool, like the canvas, paint, and paintbrush. If I want to sell that art, it's my responsibility to make sure it isn't breaking any copyright laws. The problem arises when people make blanket statements that all AI outputs break copyright, which is false. And even if there were a ruling forbidding the use of AI systems, it could only criminalize monetization of the outputs, not their private or unofficial use within some other creative process as a supporting tool.

    All artists use copyrighted material during their work; painters usually cut out photos and print things out to use as references and inspiration while working. So all of this becomes messy for those proposing to shut these AI models down, and in some cases leads to double standards.

    In any case it leads back to the original claim that my argument challenges: the claim that the training process breaks copyright because the models are trained on copyrighted material. Which is what the A and B scenario is about.

    You blame critics for not understanding the technology, but do you understand copyright law? Imagine if the law was changed and gave Ai-generated content carte blanche just because the machines have been designed to think or acquire skills in a similar way as humans. That's a slippery slope to hell, and instead of a general law you'd have to patch the systems to counter each and every possible misuse. Private tech corporations acting as legislators and judges of what's right and wrong. What horror.jkop

    Explain to me what it is that I don't understand about copyright law.

    And explain to me why you make such a slippery-slope argument, an "appeal to extremes" fallacy, thinking that such a scenario is what I'm proposing. You don't seem to read what I write when I say that artists need to work with these companies for the purpose of alignment. Do you understand what I mean by that? Because your slippery-slope scenario tells me that you don't.

    You keep making strawmen out of a binary interpretation of this debate, as if my criticizing how artists argue against AI means I want to rid AI use of all copyright law. That is clearly false.

    I want people to stop making uninformed, uneducated, and polarized arguments, and instead educate themselves to understand the systems, so the correct arguments can be made that ensure artists and society can align with the development of AI. Because the alternative is the nightmare you fear. And when artists and people just shout their misinformed groupthink opinions as hashtags until a court rules against them because they didn't care to understand how these systems work, that nightmare begins.

    If your claim is that similarity between human and artificial acquisition of skills is a reason for changing copyright law, then my counter-argument is that such similarity is irrelevant.jkop

    How do you interpret this as being about changing copyright law? Why are you making things up about my argument? Nowhere in my writing did I propose we change copyright law in favor of these tech companies. I'm saying that copyright law does not apply to the training process of an AI model, because the training process is no more an act of copyright infringement than a person with photographic memory reading every book in a library. You seem unable to grasp the difference between the training process and the output generation. And it's this training process specifically that is claimed to infringe copyright and that forms the basis of many of the current lawsuits, not the generative part. Or rather, these lawsuits have been baked into a confused mess of uninformed criticism that good lawyers on the tech companies' side could argue against in the same manner I do.

    And the court requires proof of copyright infringement. If the court rules that there's no proof of infringement in the training process, it could spiral into dismissal of the case, and that sets the stage for a total dismissal of all artists' concerns. No one seems to see how dangerous that is. This is why my actual argument, which you constantly seem to misunderstand, is to focus on the problems with image generation and to create laws that actually dictate mandatory practices for these tech companies to work with artists for the purpose of alignment. That's the only way forward. These artists are now on a crusade to rid the world of these AI models, and it's a fool's errand. They don't understand the models and the technology, and they try to bite off more than they can chew instead of focusing their criticism properly.

    What is relevant is whether the output contains recognizable parts of other people's work.jkop

    Alignment is work already being conducted by these companies, as I've said now numerous times. It's about making sure plagiarism doesn't occur. It's in everyone's interest that it doesn't happen.

    And the challenge is that you need to define "how similar" something is in order to define infringement. This is the case in every copyright case in court. Artists are already using references that copy entire elements into their own work without it being copyright infringement. An artist can see something in certain colors, then see something else with a nice composition, then see a picture in a newspaper that becomes the central figure of the artwork, and combine all three into a new image that everyone would consider "original". If an AI model does exactly the same, while using only its neural memory, it is using even less direct reference and influence, since a diffusion model never copies anything directly into an image. Even older examples of outdated, misaligned models that produced almost identical images still couldn't reproduce them exactly, because they weren't using a file as the source; they were using neural memory in much the same way we humans do.

    Compare that to artists who directly use other people's work in their art; it happens more than people realize. Just look at how many films there are in which the director blatantly copies a painting into a shot's composition and style, or uses an entire scene from another movie almost verbatim. Where do you draw the line? Why would diffusion models and LLMs be worse than how artists already work? As I said, it ends up being an arbitrary line where we simply conclude... because it's a machine. But as I've said, the machine, like the gun that forced the painter to plagiarize, cannot be blamed for copyright infringement. Only the user can.
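    The "how similar" question can be made concrete with even a crude similarity score. The sketch below uses Python's standard-library SequenceMatcher, which returns a 0-to-1 ratio; where a court would "draw the line" on that scale is exactly the arbitrary choice the paragraph describes. The example texts and any implied cutoff are invented for illustration, not taken from any real case.

```python
# Quantifying "how similar": difflib's SequenceMatcher gives a 0..1 score.
# A near-verbatim "trace" scores close to 1.0; a loosely "inspired" text
# scores far lower. Choosing the infringement cutoff on this scale is the
# arbitrary line-in-the-sand the surrounding argument is about.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio between two strings (1.0 means identical)."""
    return SequenceMatcher(None, a, b).ratio()

original = "the quick brown fox jumps over the lazy dog"
traced   = "the quick brown fox jumps over the lazy cat"   # near-verbatim
inspired = "a swift auburn fox leaps above a sleeping hound"

print(round(similarity(original, traced), 2))    # close to 1.0
print(round(similarity(original, inspired), 2))  # much lower
```

    The point of the sketch is that similarity is a continuum, not a binary, so any infringement verdict built on it must pick a threshold somewhere.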

    One might unintentionally plagiarize recognizable parts of someone else's picture, novel, scientific paper etc. and the lack of intent (hard to prove) might reduce the penalty but hardly controversial as a violation.jkop

    Yes, which means the alignment problem is the most important one to solve. Yet, as mentioned, if we actually study how artists work, if we check their process, if I check my own process, it quickly becomes very muddy how works of art form. People saying that art magically appears out of our divine creativity are simply being religious or spiritual, and that's not a foundation for law. The creative process is part technological/biological function and part subjective intention. If the function can be externalized as a tool, then how does copyright get defined? Copyright can only be applied to intention; it cannot be applied to the process, otherwise all artists would infringe copyright in their process of creation.

    In the end, if alignment gets solved for these AI models, to the point where they are unable to copy anything beyond a certain threshold of plagiarism in an output, and that threshold aligns with copyright law's definitions of "originality", then these systems will actually be better at avoiding copyright infringement than human artists, because they won't try to fool the copyright system for the purpose of riding on others' success, which is the most common reason people infringe copyright outside of accident. An aligned system does not care; it only sets the guardrails so that the human component cannot step over the line.
  • The "AI is theft" debate - An argument
    There is data, and at a deeper level there is what the data means. Can an AI algorithm ever discover what data means at this deeper level?RussellA

    The system doesn't think, and the system doesn't have intention. Neither writer exists within the system; it is the user who supplies the intention that guides it. The level of complexity the system operates on defines how good the output becomes. But the fact remains that if the engineers program the system not to plagiarize and the user doesn't ask for plagiarism, there's no more plagiarism going on than with an artist who draws upon their memory of works of art that inspire them. These systems have built-in guardrails that attempt to prevent accidental plagiarism, something that occurs all the time among humans and has been spotted within the systems as well. But in contrast to human accidental plagiarism, these systems are getting better and better at catching such accidents, because such accidents are in no one's interest. It's not good for the artist whose work was part of the training data, it's not good for the user, and it's not good for the AI company. No one has any incentive to let these AI models be plagiarism machines.

    But the problem I'm bringing up in my argument primarily has to do with claims that the act of training the AI model using copyrighted material is plagiarism and copyright infringement. That's not the same as the alignment problem of its uses that you are bringing up.

    If alignment keeps getting better, will artists stop criticizing these companies for plagiarism? No. Even if AI models reach a point where accidental plagiarism is basically impossible, artists will not stop criticizing, because they aren't arguing from rational reasoning; they want to take down these AI models because many feel they're a threat to their income, and they invent lies about how the systems operate and about the intentions of the engineers and companies behind them. They argue that these companies "intentionally target" them when they don't. These companies invented a technology that mimics how the human brain learns and memorizes, and how this memory functions as part of predicting reality, demonstrating it by predicting images and text into existence and showing how this prediction starts to give rise to other attributes of cognition. It's been happening for years, but is now at a point where there can be practical applications for it in society.

    We will see the same with robotics in a couple of years. The partly Nvidia-led research using LLMs for training robots that was just published showed how GPT-4 can be used in combination with robotics training and simulation training. Meaning, we will soon see a surge in how well robots perform. It's basically just a matter of time before we start seeing commercial robots for generalized purposes or business applications outside of pure industrial production. And this will lead to other sectors of society starting to criticize these companies for "targeting their jobs".

    But it's always been like this. It's the Luddites all over again, smashing the industrial machines instead of getting to know what the new technology could mean and how they could use it as well.

    That's one of the main issues right? How comparable human creativity is to that of AI. When an AI "draws upon" all the data it is trained on is it the same as when a human does the same like in the two scenarios you've brought up?

    At the very least it can be said that the consensus is that AIs don't think like we do, which is why don't see tech companies proclaiming that they've achieved AGI. There are certainly some clear shortcomings to how current AI models work compared to human brain activity, though given how little we know about neuroscience (in particular the process of human creativity) and how much less we seem to know about AI I'd say that the matter of whether we should differentiate human inspiration and AI's' "inspiration" currently is at best unclear.
    Mr Bee

    AGI doesn't mean it thinks like us either. AGI just means that it generalizes across many different functions and does so automatically based on what's needed in any given situation.

    But I still maintain that people misunderstand these things. It's not a binary question; we are not choosing between A) they don't think like humans, therefore they are plagiarism machines, and B) they think like us, therefore they don't plagiarize.

    Rather, it is about looking at what constitutes plagiarism and copyright theft in these systems. Copyright law is clear when it comes to plagiarism and stealing copyrighted material. But it runs into problems when applied as a blanket statement against these AI models. These AI models don't think like us, but they mimic parts of our brain. And mimicking part of our brain is not copyright infringement or theft, because if a system does so with remarkable similarity, then we can't criticize those operations without criticizing how the same functions exist within ourselves. The difference between our specific brain function and these AI systems becomes arbitrary and starts to take the form of spirituality or religion, in which the critics fall back on "because we are humans".

    Let's say we build a robot that uses visual data to memorize a street it walks along. It uses machine learning and a constantly updating neural network that mimics a floating memory system like our own, constantly changing with more input data. While walking down the street it scans its surroundings and memorizes everything into neural connections, like we do. At some point it ends up at a museum of modern art and goes inside, where it memorizes its surroundings, which also means all the paintings and photographs. Later, in the lab, we ask it to talk about its day; it may describe its route, and we ask it to form an image of the street. It produces an image that somewhat looks like the street, skewed, but with similar colors, similar weather and so on. This is similar to how we remember. We then ask it to draw a painting inspired by what it saw in the museum. What will it do?

    Critics of AI would say it will plagiarize and copy, and that it has stored the copyrighted photos and paintings through the camera. But that's not what has happened. It has a neural network that formed out of the input data; it doesn't have a flash card storing a video or photo of anything. It might draw something that is accidental plagiarism out of that memory, but since a diffusion system generates from noise, through prediction, into form, the result will always differ from pure reality, different from a pure copy. Accidental plagiarism happens all the time with people, and as artists we learn to check our work so it doesn't fall under it. If the engineers push the system to do such checks, to make sure it doesn't get too close to copyrighted material, then how can it plagiarize? Then we end up with a system that does not directly store anything, that remembers what it has seen just as humans remember through our own neural networks, and that prevents itself from drawing anything too close to an original.

    One might say that the AI system's neural memory is too perfect and therefore amounts to having the material on an ordinary flash card, but how is that different from a person with photographic memory? It then becomes a question of accuracy, effectively saying that people with photographic memory shouldn't enter a museum, as they are basically storing all those works of art in their neural memory.

    Because what the argument here is fundamentally about is the claim that the act of training AI models on copyrighted material breaks copyright. The use and alignment problem is another issue, and one that can be solved without banning these AI models. But the push for banning these systems stems from claims that they were trained on copyrighted material. And it is that specific point that I argue doesn't hold, due to how these systems operate and how the training process is nearly identical to how humans "train" their own neural-based memory.

    Let's say humans actually had a flash card in our brain. And everything we saw and heard, read and experienced, were stored as files in folders on that flash card. And when we wrote or painted something we all just took parts of those files and produced some collage out of them. How would we talk about copyright in that case?

    But when a system does the opposite and instead mimics how our brain operates and how we act from memory, we run into a problem, since much of our copyright law is defined by interpreting "how much" a human "copied" something. How many notes were taken, accidentally or not; how much of another painting can be spotted in the new work, and so on. But for AI-generated material, it seems it doesn't matter how far off from others' work it is. It could be provably as original as any human creation deemed "original", yet it still gets branded plagiarism because the training data was copyrighted material, without realizing that artists function on the same principles and sometimes go even further than these AI systems, as my example of concept artists showed.

    The conclusion, or message, I'm trying to convey here is that the attempt to ban these AI models and call their training process theft is just Luddite behavior out of existential fear. The real problem is alignment to prevent accidental plagiarism, which these companies work hard on, as it's in no one's interest for that to appear in outputs. This antagonizing, pitchfork behavior that artists and others display in this context is counter-productive; they should instead demand to work WITH these companies to help mitigate accidental plagiarism and ill-willed use of these models.

    It's not like photobashing isn't controversial too mind you. So if you're saying that AI diffusions models are equivalent to that practice then that probably doesn't help your argument.Mr Bee

    No, I'm saying diffusion models don't do that, and that there's a big irony in the fact that many concept artists who are now actively trying to fight AI with arguments of theft have effectively made money in the past through a practice that is the very process they falsely accuse these diffusion models of, based on a misunderstanding of how they actually operate. The operation of these diffusion models, compared to that practice, actually makes the model more moral than the concept artists in this context, as diffusion models never directly copy anything into their images, since they hold no direct copies in memory.

    This highlights a perfect example of why artists' battle to ban these models, and their reasoning behind it, becomes rather messy and could bite them back in ways that destroy far more for them than if they tried to help these companies align their models for the benefit of artists.
  • The "AI is theft" debate - An argument
    According to you, or copyright law?jkop

    According to the logic of the argument. Copyright law does not cover these things, and the argument I'm making is that there are problems with people's reasoning around copyright and how these systems operate. A user who intentionally pushes a system into plagiarism, who carefully manipulates the prompts with that intention, disregarding the system's warnings and its alignment programming against doing so, ends up being the sole guilty party. It's basically as if you asked a painter to make a direct copy of a famous painting and the painter said "no", pointing out that it's plagiarism, yet you took out a gun, held it to the painter's head and demanded it. Would any court of law say that the painter, who is capable of painting any kind of painting in the world, is as guilty as you, just because he has that painting skill, knowledge of painters and different paintings, and the technical capability?

    If 'feeding', 'training', or 'memorizing' does not equal copying, then what is an example of copying? It is certainly possible to copy an original painting by training a plagiarizer (human or artificial) in how to identify the relevant features and from these construct a map or model for reproductions or remixes with other copies for arbitrary purposes. Dodgy and probably criminal.jkop

    No one is doing this. No one is intentionally programming the systems to plagiarize. It's just a continuation of the misunderstandings. Training neural network systems is a computer science field in pursuit of mimicking the human mind, generalizing operation to function beyond direct programming. If you study the history of artificial intelligence within computer science, the concepts of neural networks and machine learning are about forming networks whose operations act on pattern recognition, essentially forming new ideas or generalized operations out of the patterns that emerge from the quantity of analyzed information and how the pieces exist in relation to each other. This is then aligned into a system of prediction that emulates the predictive thinking of our brains.
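
    To make "learning patterns rather than storing examples" concrete, here is a deliberately tiny, hypothetical sketch: a classic perceptron, not any production model. All names and numbers are my own illustration. After training, the only thing that remains is a handful of adjusted weights; the training examples themselves are not stored anywhere in the "model".

```python
# Toy perceptron sketch (hypothetical illustration, not a real AI system).
# It learns the AND pattern by nudging weights on each mistake; after
# training, only the weights and bias remain, not the training examples.

def train_perceptron(samples, epochs=20, lr=1):
    w = [0, 0]  # the entire "memory" of training: two weights...
    b = 0       # ...and one bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Four training examples of the AND pattern
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

    The point of the toy: what generalizes the pattern is the adjusted weights, which is why "training equals copying" is a category error even at this miniature scale.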

    A diffusion model therefore "hallucinates" an image forward out of this attempt to predict shapes, colors and perspective based on what it has learned, not out of copies of what it learned from. And the key component that is missing is the guiding intent: "what" it should predict. It's not intelligent, it's not a thinking machine; it merely mimics the specific processes of neural memory, pattern recognition and predictive operation that we have in our brains. So it cannot just predict on its own; it can't "create on its own". It needs someone to guide the prediction.
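
    The "noise to form" process can be caricatured in a few lines. This is a hypothetical toy, nothing like a real trained denoiser (the hash trick stands in for a neural network): the point is only that generation starts from random noise and is iteratively nudged toward a prediction, so no source file is ever pasted into the output.

```python
import random

def denoise_step(image, guidance):
    # Stand-in "predictor": nudges each pixel toward a value derived
    # from the guidance text. A real diffusion model would use a
    # trained neural network here; this hash trick is purely a toy.
    return [0.9 * px + 0.1 * ((hash((guidance, i)) % 256) / 255.0)
            for i, px in enumerate(image)]

def generate(guidance, size=16, steps=50, seed=0):
    rng = random.Random(seed)
    image = [rng.random() for _ in range(size)]  # start from pure noise
    for _ in range(steps):
        image = denoise_step(image, guidance)
    return image

img = generate("a street at dusk")
# every pixel value stays in [0, 1]; nothing is pasted from a stored file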

    Therefore, if you actually look at how these companies develop these models, you will also see a lot of effort put into alignment programming. They do not intentionally align the models to perform plagiarism, they actively work against it, and making sure there are guardrails for accidental plagiarism and block users trying to intentionally produce it. But even so, these systems are black boxes and people that want to manipulate and find backdoors into plagiarism could be able to do so, especially on older models. But that only leads back to who's to blame for plagiarism and it becomes even clearer that it's the user of the system who intentionally want to plagiarize something and solely becomes the one guilty of it. Not the engineers, or the process of training these models.

    You use the words 'feeding', 'training', and 'memorizing' for describing what computers and minds do, and talk of neural information as if that would mean that computers and minds process information in the same or similar way. Yet the similarity between biological and artificial neural networks has decreased since the 1940s. I've 'never seen a biologist or neuroscientist talk of brains as computers in this regard. Look up Susan Greenfield, for instance.jkop

    The idea behind machine learning and neural networks were inspired by findings in neuroscience, but the purpose wasn't to conclude similarities, it was to see if operations could be generalized and improve predictability in complex situations, such as robotics based on experimenting with similarities to what neuroscientists had discovered about the biological brain. It's only just recently that specific research in neuroscience (IBS, MIT etc.) has been focusing on these similarities between these AI models and how the brain functions, concluding that there are striking similarities between the two. But the underlying principles of operation has always been imitating how memory forms. But you confuse the totality of how a brain operates with the specific function of memory and predictability. Susan Greenfield is even aligned with how memory forms in our brain and hasn't published anything to the contrary of what other researchers have concluded in that context. No one is saying that these AI systems acts as the totality of the brain, but the memory in our head exists as a neural network that acts from the connections rather than raw data. This is how neuroscientists describes how our memory functions and operates as the basis for prediction and actions. The most recent understandings is that memories are not stored in parts of the brain, but instead exists as spread across the brain with different regions featuring a more or less concentration of connections based on the nature of the information. Essentially acting like weights and biases in an AI system that focus how memories are used.

    The fact is that our brain doesn't store information like a file. And likewise, a machine-learned neural network doesn't either. If I read and memorize a page from a book (easier if I had photographic memory), I didn't store this page in my head as a file like a computer does. The same goes for a neural network that was trained with this page. It didn't copy the page, it has put it in relation to other pages, other texts, other things in the world that has been part of its training. And just like our brain, if we were to remove the "other stuff", then the memory and understanding of that specific page would deteriorate, because the memory of the page relies on how it relates to other knowledge, other information, about language, about visual pattern recognition, about contextual understanding of the texts meaning and so on. All are part of our ability to remember the page and our ability to do something with that memory.

    Again, I ask... what is the difference in scenario A and scenario B? Explain to me the difference please.

    Your repeated claims that I (or any critic) misunderstand the technology are unwarranted. You take it for granted that a mind works like a computer (it doesn't) and ramble on as if the perceived similarity would be an argument for updating copyright law. It's not.jkop

    It's not unwarranted since the research that's being done right now continues to find similarities to the point that neuroscientists are starting to utilize these AI models in their research in order to further understand the brain. I don't see anyone else in these debates actually arguing out of the research that's actually being done. So how is it unwarranted to criticize others for not fully understanding the technology when they essentially don't? Especially when people talk about these models storing copyrighted data when they truly don't, and that these engineers also fundamentally programmed the models to focus on doing plagiarism, when that's a blatant lie.

    It seems rather that it's you who take for granted that your outdated understanding of the brain is enough as a counter argument. And forget the fact that there are currently no final scientific conclusions as to how our brain works. The difference however, is that I'm writing these arguments out of the latest research in these fields and that's the only foundation to form any kind of argument, especially when we're talking about the context of this topic. What you are personally convinced about when it comes to how the brain works is irrelevant, whoever you've decided to trust in this field is irrelevant. It's the consensus of up to date neuroscience and computer science that should act as the foundation for arguments.

    So, what are you basing your counter arguments on? What exactly is your counter argument?
  • The "AI is theft" debate - An argument


    The definitions of how artists work upon inspirations and other's work is part of the equation, but still dependent on the intention of the artist. The intention isn't built into the AI models, it's the user that forms the intended use and guiding principle of creation.

    So, artist's stealing from other artists is the same as a user prompting the AI model to "steal" in some form. But the production of the text or image by the AI model is not theft in itself, it's merely a function mimicking how the human brain acts upon information (memory) and synthesize that into something new. What is lacking within the system itself is the intention and thus it can't be blamed for theft.

    Which means we can't blame the technicians and engineers for copyright infringement as these AI models don't have "copies" of copyrighted work inside them. An AI model is trained on data (the copyrighted material) and forms a neural network that functions as its memory and foundation of operation, i.e what weights and biases it has for operating.

    So, it's essentially exactly the same as how our brain structure works when it uses our memory that is essentially a neural network formed by raw input data; and through emotional biases and functions synthesize those memories into new forms of ideas and hallucinations. Only through intention do we direct this into forming an intentional creative output, essentially forming something outside of us that we call art.

    It's this difference that gets lost in the debate around AI. People reason that the training data is theft, but how can that be defined as theft? If I have a photographic memory and I read all books in a library, I've essentially formed a neural network within me on the same ground as training a neural network. And thus, there are no copies, there's only a web of connections that remember data.

    If we criminalize remembering data, the formation of a neural network based on data, then a person with photographic memory is just as guilty by merely existing in the world. The difference between training an AI model and a human becomes arbitrary and emotional rather than logical.

    We lose track of which moral actions are actually immoral and blame the wrong people. It's essentially back to the luddites destroying machines during the industrial revolution, but this time, the immoral actions of the users of these AI systems gets a pass and instead the attacks gets directed towards the engineers. Not because they're actually guilty of anything, but rather because of the popularity of hating big tech. It's a polarizing situation in which the rational reasoning gets lost in favor of people forming an identity around whatever groupthink they're in

    And we might lose an enormous benefit to humanity through future AI systems because people don't seem to care to research how these things actually operate and instead just scream out their hate due to their fear of the unknown.

    Society needs to be smarter than this. Artists need to be smarter than this.
  • The "AI is theft" debate - An argument
    The user of the system is accountablejkop

    If the user asks for an intentional plagiarized copy of something, or a derivative output, then yes, the user is the only one accountable as the system does not have intention on its own.

    possibly its programmers as they intentionally instruct the system to process copyright protected content in order to produce a remix. It seems fairly clear, I think, that it's plagiarism and corruption of other people's work.jkop

    But this is still a misunderstanding of the system and how it works. As I've stated in the library example, you are yourself feeding copyrighted material into your own mind that's synthesized into your creative output. Training a system on copyrighted material does not equal copying that material, THAT is a misunderstanding of what a neural system does. It memorize the data in the same way a human memorize data as neural information. You are confusing the "intention" that drives creation, with the underlying physical process.

    There is no fundamental difference between you learning knowledge from a book and these models learning from the same book. And the "remix" is fundamentally the same between how the neural network forms the synthesis and how you form a synthesis. The only difference is the intention, which in both systems, the mind and the AI model, is the human input component.

    So it's not clear at all that it's plagiarism because the description of the system that you did isn't correct about how it functions. And it's this misunderstanding that a neural network and machine learning functions and this mystification about how the human mind works that produces these faulty conclusions.

    If we are to produce laws and regulations for AI, they will need to be based on the most objective truths about how these systems operate. When people make arguments that make arbitrary lines between how artificial neural networks work and neural networks in our brains, then we get into problematic and arbitrary differences that fundamentally spells out an emotional conclusion: "we shouldn't replicate human functions" and the followup question becomes "on what grounds?" Religious? Spiritual? Emotional? Neither which is grounds for laws and regulations.

    What's similar is the way they appear to be creative, but the way they appear is not the way they function.jkop

    That's not what I'm talking about or what this argument is about. The appearance of creativity is not the issue or fundamental part of this. The function; the PHYSICAL function of the system is identical to how our brain functions within the same context. The only thing that's missing is the intention, the driving force in form of the "prompt" or "input request". Within us, our creativity is interlinked with our physical synthesis system and thus it also includes the "prompt"; the "intention" of creation. AI systems however, only has the synthesis system, but that in itself is not breaking any copyrights anymore than our own mind when we experience something and dream, hallucinates and produce ideas. The intention to plagiarize is a decision and that decision is made by a human, that responsibility is made by a human at the point of the "intention" and "prompt", not before it.

    And so, the argument can be made that a person who reads three books and writes a derivative work based on those three books is doing the same as someone who prompts an AI to write derivative work. Where do you put the blame? On reading the three books (training the models), or the intention of writing the derivative work (prompting the AI model to write derivative)?

    If you put the blame on the act of training the models, then you also put blame on reading the books. Then you are essentially saying that the criminal act of theft is conducted by all people all the time they read, see, hear and experience someone else's work. Because that is the same as training these models, it is the same fundamental process.

    But putting blame on that is, of course, absurd. We instead blame the person's intent of writing a derivative piece. We blame the act of wanting to produce that work. And since the AI models doesn't have that intent, you cannot logically put blame on the act of training these models on copyrighted material, because there's nothing in that act that breaks copyright. It's identical to how we humans consume copyrighted material, storing it in neural memory form. And a person with photographic memory excels at this type of neural storage, exactly like these models.

    A machine's iterative computations and growing set of syntactic rules (passed for "learning") are observer-dependent and, as such, very different from a biological observer's ability to form intent and create or discover meanings.jkop

    I'm not sure if you really read my argument closely enough because you keep mixing up intention with the fundamental process and function of information synthesis.

    There are two parts of creation: 1) The accumulation of information within neural memory and its synthesis into a new form. 2) The intention for creation that guides what is formed. One is the source and one is the driving principle for creation out of that source. The source itself is just a massive pool of raw floating data, both in our minds and in these systems. You cannot blame any of that for the creation because that is like blaming our memory for infringing on copyrighted material. It's always the intention that defines if something is plagiarized or derivative. And yes, the intention is a separate act. We constantly produce ideas and visions even without intent. Hallucinations and dreams form without controlled intent. And it's through controlled intent we actually create something outside of us. It is a separate act and part of the whole process.

    And this is the core of my argument. Blaming the process of training AI models as being copyright infringement is looking to be objectively false. It's a fundamental misunderstanding of how the system works, and they seem to more or less come from people's emotional hate for big tech. I'm also critical of big tech, but people will lose much more control over these systems if our laws and regulations get defined by court rulings in which we lose because side of the people made naive emotional arguments about things they fundamentally don't understand. Misunderstandings of the system, of the science of neural memory and functions and how our own minds work.

    I ask, how are these two different?

    A) A person has a perfect photographic memory. They go to the library every day for 20 years and read every book in that library. They then write a short story drawing upon all that has been read and seen in that library during these 20 years.

    B) A tech company let an algorithm read through all books at the same library, which produces a Large Language Model based on that library as its training data. It's then prompted to write a short story and draws upon all it has read and seen through the algorithm.
    Christoffer

    This is the core of this argument. The difference between intention and the formation of the memory core that creation is drawn from and how synthesis occurs. That formation and synthesis in itself does not break copyright laws, but people still call it theft without thinking it through properly.

    Neither man nor machine becomes creative by simulating some observer-dependent appearance of being creative.jkop

    This isn't about creativity. This isn't about wether or not these systems "can be creative". The "creative process" does not imply these systems being able to produce "art". This type of confusion in the debate is what makes people not able to discuss the topic properly. The process itself, the function that simulates the creative process, is not a question of "art" and AI, that's another debate.

    The problem is that people who fight against AI, especially artists fighting against these big tech companies, put blame on the use of copyrighted material in training these models, without understanding that this process is identical to how the human mind is "trained" as well.

    All while many artists, in their own work and processes, directly use other people's work in the formation of their own, but still put some arbitrary line between themselves and the machines when these machines seem to do the same, even though things like diffusion models are fundamentally unable to do produce direct copies in the same way as some of these artists do (as per the concept art example).

    It's a search for a black sheep in the wrong place. Antagonizing the people in control of these systems rather than trying to promote themselves, as artists, to be part of the development and help form a better condition for future artists.
  • The infinite straw person paradox
    I think you meant my comment was written with A.I. and it was in fact. Good looking out, Lionino. I am messing with newly updated Bing Copilot. My post was translated here by me from the help of a.i., but I did not directly copy and paste it from chat.Kizzy

    Rather than have the AI write for you, write your own post and maybe have the AI analyze the grammar and structure as support. While AI functions well at writing, the problem is that you lose, or never challenge, your own process of thought; writing isn't just outward communication, it's part of your internal processing of ideas.

    Studies have shown that the physicality of writing increases brain activity. The effect is strongest when writing on paper with a pen, as that is the most physical form of the act, but it's also present when writing on a keyboard.

    So letting an AI do that work for you will only mean losing part of the purpose of gaining knowledge through discourse. Write your own material and maybe use AI as a research and editing tool, but never as the source of the train of thought behind an argument. It will make you cognitively numb.
  • Donald Trump (All General Trump Conversations Here)
    What I can't figure out is, what Trump voters think they're voting for.Wayfarer

    They don't know what they're voting for in terms of politics; they aren't educated enough, or they are so deep within their own bubble that they don't have access to anything but the notion of "us and them". For them, politics is nothing more than a wrestling match. They want a "good guy" who is charismatic and talks like they do. They aren't intelligent enough to comprehend normal politics or understand political ideas; they flock to the emotional, to the experience of shouting in a group against the evil. They're no different from anyone else who falls to their knees before a father figure who can guide them to a better life.

    It is basic religious fundamentalism in its mechanics.

    well, there's some very dangerous forces at work here.Wayfarer

    Just as with any other religious fundamentalism. It's arguably the same mechanics as with the Nazi regime; the mechanics function the same as at any other time in history when there's a charismatic leader (in their eyes and ears) who promises them paradise.

    It goes one of two ways. Either the charisma fades, the group's fanaticism fading with it as generations die off and are replaced by new ones holding less fundamentalism in their hearts. Or they grow to a point where they believe they hold enough power to simply take over. At that point, if they are stronger than the rest, they will replace the people in power and force the rest into fascist obedience. But more likely, if they try it, the rest will rise up and realize they need to push back, resulting in civil war or a major war.

    They would call it revolution, but there's a distinct difference: a true, positive revolution revolves around standing up against a government that has taken away the power of its citizens, while they strive to be the ones taking away the power of the people (in their own favor).

    They effectively end up being a fundamentalist terrorist group that has yet to initiate violence, not counting the Jan 6 coup attempt and any unreported religious violence against minorities that has happened without the public's knowledge.

    And as such, they should be watched the same way we keep eyes on terrorist groups around the world. Especially if Trump loses the election, I wouldn't be surprised if we see something far worse than Jan 6. But regardless of how such a violent event plays out, it will spell the end for MAGA, in that any violence larger than Jan 6 will cement their status as a terrorist movement, and only the most hardcore MAGA folks will keep wearing those hats.

    The "normal" people are basically just waiting for a legitimate reason to remove these people from power. Any political support of violence against US citizens will be such a hard line that whatever protection these politicians had before such events will be gone, and they will be removed by force. If they then try to provoke further violence, that's when the entire movement will be officially registered as a terrorist group.

    If that happens, it would also spell the breaking point for the Republican party. The ones opposing Trump would either eject anyone even close to supporting Trump or the MAGA movement, or they'll leave the Republican party behind as a rotting corpse while they start a new party focused on being the real Republicans, promising to never let similar "terrorists" into their ranks; cleaning up their history and washing away any stains from it.

    However things go, there will be a fulcrum point that tips things in some direction. But I'm too optimistic to see anything other than the utter collapse of MAGA through self-destruction. They're too stupid to function as a revolutionary movement, too stupid to uphold any momentum in such actions. They're basically children playing with fire, and when they effectively burn the house down, they will face the consequences. They shouldn't be underestimated, but we shouldn't overestimate their ability either. They may have guns and explosives, but if violence erupts into insurrection, we have to remember that revolutionary movements in history actually had training and intelligence behind their attempts. These people aren't revolutionary masterminds, and within a nation like the US, any revolutionary action would require an extremely intelligent strategy that fools the entire military.

    And if Trump wins and tries anything like this himself, I don't think the rest of the government or the military will actually listen. If Trump starts initiating violence against his own citizens, that's going to be a fast ticket to his downfall.

    The only reason we're not seeing enough pushback at the moment is that Trump and his followers are just barely on the right side of democratic rights. But in a nation like the US, any attempt to remove the constitution or demolish the basic fundamentals of how people define the US will result in strong pushback, maybe even outright violent pushback, against Trump and his followers.
  • US Election 2024 (All general discussion)
    because if it were implemented, it might work, and it would make Joe Biden look good.Wayfarer

    It's here that the question of democracy becomes muddied. If actions are taken not for the people but for the sake of power and winning elections, then there's no true representative democracy anymore, only a pseudo-democracy.

    The need to simplify everything down to calling pseudo-democracies real democracies, because people seem unable to understand what is and isn't a true democracy, makes it impossible to progress past the problems of these kinds of pseudo-democracies.

    The US is just a patchwork of a democracy, barely on the side of being for the people, mostly operating under ideals similar to those of religiously fundamentalist nations around the world. It's probably the only nation in the world operating under Christian fundamentalism, and it infects their democracy and produces demagogues and pseudo-democratic practices.
  • Health
    What counts as ultra-processed?Mikie

    https://health.clevelandclinic.org/ultra-processed-foods

    An example list by ChatGPT:

    Chicken nuggets
    Frozen meals
    Hot dogs
    Packaged soups
    Potato chips
    Soft drinks
    Sweetened breakfast cereals
    Packaged bread and buns
    Industrial pastries and cakes
    Pre-packaged pies and pasta dishes
    Margarine and spreads
    Ice cream and dairy-based desserts
    Processed cheese products
    Flavored milk drinks
    Instant noodles and soups
    Processed meats such as sausages, salami, and bacon
    Microwave popcorn
    Store-bought cookies and biscuits
    Candy bars
    Artificially sweetened beverages
    Flavored yogurts high in sugar
    Ready-to-eat snacks like pretzels and flavored crackers
  • Health


    I avoid ultra-processed food and take care to find good-quality sources for the food I eat.

    England is a perfect example of what happens when ultra-processed food has been mainstreamed so hard that it starts to kill the country's citizens. Avoid ultra-processed food at all costs.
  • “That’s not an argument”
    An argument is the presenting of reasons/evidence for a claim or conclusion. Really that simple.Mikie

    I rarely see people actually doing this here. It's more common than in other places online, but it's still mostly anecdotal and emotional reasoning, and when questioned, people throw tantrums because what they were so convinced of in their own minds didn't survive normal scrutiny.

    I think there's a point in not just focusing on making a proper argument. For a philosophical discussion to take place, people need to abandon their emotions about their argument and treat it as someone else's. Detachment from one's ideas is the only way to avoid falling into bias and fallacy.

    So, start out with the argument, and then treat any following discussion in which people object to it as if you were one of them, discussing someone else's idea. As soon as the argument is presented, don't act as if you own it, or you will start protecting it with your life.

    I often call out fallacies and biases, but that's because they're so common among people who aren't well versed in treating their own convictions with detachment. A core tenet of philosophy is to question yourself as part of scrutinizing a formed idea, but most of the time people just plant their concepts and ideas as flags on a battlefield before going to war for that flag.

    But I agree that some are sloppy in their calling out of fallacies and biases. Many call out fallacies that aren't fallacies, lacking knowledge of what certain fallacies really are, and just wave the term around as a shield against any form of scrutiny. But generally, the same people doing that are also the ones committing most of the fallacies themselves.

    The main problem on this forum is rather that when people create their arguments, they aren't actually presenting any evidence or rational logic behind their reasoning. They cook up whatever they believe is evidence and then try to demand it be enough to prove their point, ending up going in circles saying "I've already presented the evidence".

    Generally, the majority of people don't have the necessary knowledge of how to make actual arguments or how to decode them. So most people just go around in circles, failing to grow their knowledge even in a place dedicated to growing knowledge.
  • Donald Trump (All General Trump Conversations Here)
    Trumpism is an authoritarian[a] political movement that follows the political ideologies associated with Donald Trump and his political base,[32][33] incorporating ideologies such as right-wing populism, national conservatism, neo-nationalism, and neo-fascism.[b] Trumpist rhetoric heavily features anti-immigrant,[43] xenophobic,[44] nativist,[45] and racist attacks against minority groups.[46][47] Other identified aspects include conspiracist,[48][49] isolationist,[45][50] Christian nationalist,[51] protectionist,[52][53] anti-feminist,[17][13] and anti-LGBT[54] beliefs. Trumpists and Trumpians are terms that refer to individuals exhibiting its characteristics.Benkei

    Careful you don't step on someone's free speech by labeling them as something they say they definitely aren't, while some apologist calls you out for calling them stupid racists rather than trying to bridge the societal gaps by giving them the intellectual respect they themselves demand to deserve.
  • Are all living things conscious?
    Isn't your title "Are all living things conscious?" a question? And isn't my answer congruent with it?Alkis Piskas

    Are you conscious and aware of the fact that I didn't create this thread and that the title question isn't mine? :sweat:
  • Are all living things conscious?
    Yes, all. Including organisms and plants. They all perceive and react to their environment. Because they all want to survive. And multiply.Alkis Piskas

    Not sure why you quoted me with the title of the thread, but consciousness requires awareness. It doesn't require self-awareness, but it does require awareness of the processes that occur to an organism and of its reactions to them. A rock isn't measurably aware of the hammer hitting it; a bug is.

    But I still don't know what you are actually responding to, or why you quoted the thread's title as if I had asked it?
  • Donald Trump (All General Trump Conversations Here)
    I'm not as concerned with Trump winning the election as I am with him losing the vote count. He has been busy keeping his MAGA base stirred up, and if he loses he might incite them to disrupt procedures at the Capital and overturn the presidency by force of arms, not simply wandering the Halls of Congress.jgill

    So all the potential consequences of him in power during a time of extreme global unrest, due to both Russia and China, are not a larger concern than some backwards MAGA cult members mounting a real attack that would quickly be fought back, and at the same time cement the need to reshape politics into a form that prevents things like this from ever happening again?

    Trump is right about one thing with his "bloodbath" rhetoric, though: if he and his followers take things too far, it will tip the scales of society's tolerance of them so far into the negative that they will be branded a terrorist group, and wearing a MAGA hat won't end well. Most of these people are gullible idiots, but if the consequences of affiliation with MAGA become too negative, they will quickly break down into very obscure, smaller groups of fanatics.

    I can't see how any of this would end well for Trump, his closest people and his followers. With luck, everything fizzles out over the years, but if Trump and his followers take things too far, then they will quickly realize that there are far more people on the good side who won't tolerate this bullshit.
  • Climate change denial
    Oh if only I could find the right way to talk. 'Crisis' good, 'catastrophe' bad; 'tipping point' good, 'point of no return' bad; 'Houston we have a problem', good, 'The rocket has exploded' bad.

    The main thing is to get the talk nuanced just so, and then everyone will act and no one will despair. Or possibly not.
    unenlightened

    Language matters, especially in media headlines, for the part of the masses who are stupid enough to only read the headlines, but who carry enough democratic power to vote in people who actively act against mitigation strategies.

    Modern capitalism has pushed media in many nations to compete in the attention economy over who can write in the boldest, most underlined ALL CAPS text ending with the most exclamation marks, all to reach the most extreme, eye-catching DOOM rhetoric possible.

    Ignoring how such media behavior affects the population who aren't intellectual enough to do anything but follow the most shallow interpretation of reality is to ignore how groupthink and cult mentality shape and form the uppermost deciding factors of democratic elections.

    Today, almost every election balances right at the midpoint between two sides, and elections become essentially decided by a very small group of people who are pushed and pulled by people in power using any kind of algorithmic weapon they can muster.

    In the end, the intellectual and educated masses stand firm on each side of an election and have to hope that their side had the bigger marketing budget to sway that herd of sheep in the middle in their direction.

    If anyone calls that kind of "democracy" the peak of our society and the spearhead of civilisation, they're delusional. Democracy today is just a sports game of herding sheep into winning and gaining power for the next four years. It's not about what's good for society or about solutions to problems.

    So, language matters; language can sway that middle herd towards or away from mitigation strategies. But since commercial media isn't playing a game of morality or truth, but of profit, the truth gets pushed into the fine print underneath the profit-generating headlines. And the headlines always focus on doom; it's what sells the most ads and grabs the most attention, and attention is today's most valuable currency, more precious than saving the world.

    Narcissus gazing into his reflection in the water; so mesmerized that he can't hear the deadly tsunami up the river.
  • Climate change denial


    Yes, that's my point: the threat is real, the science is real, but the language used in media plays into a narrative of everything being too late, when it's not. Or rather, the complexity gets lost, and the doomer climate science deniers just point to individual words as sources and reasons for their cause.

    We already had to shift from the term "climate change" to "climate crisis" as deniers leaned into arguments about the "change" having happened before in Earth's history, with no proof that human actions were the reason. Changing it to "climate crisis" has helped push back against those kinds of stupid arguments. And now that most of them have shifted to acknowledging human causes, but changed the narrative to "we're doomed, so there's no point in changing", we need to adjust the language again to push back against that kind of doomer rhetoric.

    Maybe push terms like "mitigation efforts" and "mitigation strategies" into the mainstream, to drive home the idea that it's not too late and there's still time to act. That way, the debate against deniers and doomers is instead framed around the question "why would you oppose mitigating the effects of this crisis?"

    Change in language works best on people who can't process information on their own and who instead rely on authorities to form opinions (authority in the sense of groupthink clusters and populist influencers pushing their agendas rather than upholding facts).

    Since these people have enough democratic power to push elections towards leaders who would halt mitigation strategies, the only democratic strategy left is rhetoric that persuades them.

    The other option is for the UN to declare a form of global martial law on the topic of climate change, so that no democratic nation can oppose or work against global mitigation strategies. But I doubt the UN has enough power to shift anything that way.
  • Climate change denial
    I think one error that media and climate scientists make when trying to communicate the problem is using terms like "point of no return". I think this has helped legitimize the shifting goalposts of climate science deniers' "doomer stance": "yes, the climate is shifting, and yes, we might be responsible, but there's no point in doing anything since we're already doomed".

    If we can leave out terms like "point of no return", we won't play into their newest, but equally stupid, position against mitigation projects.
  • US Election 2024 (All general discussion)
    So if an independent candidate DID win (it's a thought-experiment, not an actual prediction) he or she would have to turn to the Democrats because the Republicans can't manage a piss-up in a brewery.Wayfarer

    I don't think any independent candidate would win, but with three options available they would split the votes so much that the Democrats would win simply through the lack of enough votes on either Republican side.

    However, if, by some miracle, a stable Republican outlier wins instead as an independent, I think that she would gather everyone siding with the Lincoln project and build up a proper party through them. And they might even push out many of the MAGA cult members infesting the other halls of power in congress over time.

    Regardless, I think the only way out of and away from Trumpism is to have an independent option in the election. Too many Republicans who hate Trump hate the Democrats more; they would vote for the independent voice and drag along all the ones who are otherwise opting out entirely. It would divide the Republicans, but the smart ones would know it's their only way forward, as the MAGA cult could very well spell the end of the Republican party as a whole. Sooner or later, the normal Republicans will have to do some house cleaning. It's as if they've been infested by cockroaches and have given up trying to solve the issue; but if the roaches grow into too much of a problem, they will have to start stomping them out and call the exterminators.
  • US Election 2024 (All general discussion)
    which party would she be more likely to be able to negotiate policies with, in light of the dysfunction that characterises the MAGA-GOP?Wayfarer

    I'm not entirely sure how the details of these things go, but wouldn't she align with the Lincoln Project and draw together the Republicans who don't want to be part of the MAGA cult?

    Would it be too bold to predict that at some point the Republican party will split, and the new faction will be called "New Republicans" or "True Republicans" or something similar? Gathering momentum among normal people who usually vote Republican. They would acknowledge that it's hard to gain traction at this point in history, but their goal would be to build trust that voters will get a stable Republican party by voting for them, with the internal goal of cleaning house and ridding themselves of any MAGA supporters. That way, the MAGA cult would probably soon evaporate, since it can't get enough traction by numbers alone. The gullible cult folks who tire of not being represented will move on and just vote for the new Republican party, while the core MAGA cult gathers in some remote location to shoot beer cans or whatever mindless trash they find meaningful.
  • Climate change denial


    Yes, that's a good video on what's going on right now. Maybe if the people who can still use their brains stopped spending so much of their time on trash culture and lazy attitudes towards politics and philosophical thought, they might be able to help change the course. But people aren't interested, even when they're on the right side of history.

    The problem isn't really the climate deniers or the climate doomers; they're mostly irrelevant, since they're nowhere near a large enough democratic force to stand in the way of necessary change. Or at least, that's how it should be. The problem is that democracies are tilted into such an extreme balance between the decent and the absolute trash that the latter have become relevant without really being a large democratic force; all because the rest of society consists of lazy people who "can't find the time to involve themselves in these issues".

    It's this lazy attitude, this "I don't have time to think about...", that is the real problem. There's not enough demand on politicians and parties, so politicians fall back on populism in order to keep their power.

    People who acknowledge the problem and agree with the need for solutions, still just don't give a shit about voting for those who actually push for necessary change and they don't seem to care to speak up when necessary.

    This is why climate denial and doomerism should be made immoral in the social sphere; something equivalent to being a racist, spitting on the poor, abusive behavior, etc. Society needs to shift towards treating people who talk and act with such attitudes as unwelcome: fair game to be fired from jobs, kicked out of restaurants, unwanted in social situations, and so on. And if someone talked like that in the media, it should be the equivalent of uttering the n-word in public; not an opinion treated as equal to everything else.

    If social culture were pushed in that direction, it would gather greater momentum towards action. It would lead to politicians being careful not to cater to such voices, and the social consequences would be too severe for people to go around shouting such opinions and statements.

    Since the consequences of not doing anything to mitigate climate change are so far away in time, we need consequences here and now that people want to avoid. Producing a culture of more severe negative social consequences as a direct result of promoting or uttering climate denial and doomerism would help change the lazy attitude into a more active and proactive one. It would force people to be more vocal in order to keep their social moral status, and in doing so keep the focus on solutions higher on the list for politicians, as it's part of the cultural atmosphere they want to cater to in order to gain votes.

    Right now, people who talk a lot in social situations about climate change and the need for solutions are often viewed as "bad at parties", while deniers and doomers just get eye-roll reactions. That leaves the topic dead in politics, something left for Reddit brawls rather than a core societal issue. Forcing a harsher moral environment around the topic could push people to "show their moral stance" more openly, since they surely don't want to be seen as possible deniers or doomers.

    If people can't take action on their own, then make it customarily immoral not to. The sad truth is that status and social structures are more important to common people than saving the planet. So shaping the social construct of morality around the subject into something more extreme could help steer the ship in a better direction, faster.
  • Migrating to England
    Why not a Scandinavian country instead? If you want a better, working socialistic environment, England doesn't seem like the best choice.
  • Sound great but they are wrong!!!


    "It's freedom of speech"
  • Bowling Alone
    I’m old enough to see it in my own life. It’s not only technology but a decline of spirituality — one aspect being religion. So in a sense one major contributing factor is a change in philosophy.

    An interesting example is looking at the arts — movies, television, music. Compare Woodstock 1969 to Woodstock 1999. That alone says it all.
    Mikie

    I think the major part has to do with disconnection from others. With technology and the internet we've increased our ability to communicate, but we've become disconnected from the physical form of communication. There are tons of studies on the importance of physical connection, of being in the room with other people. It was very much experienced coming out of the pandemic: how mental health drastically improved as soon as people started physically seeing each other again.

    We're blasted with information in our alone time, and the information is "dead". Like this text, like all text on this forum, it is a dead representation of who the people writing here are as a whole. If we all gathered and met up, the discussions would look very different, but they would also have a dimension of emotion that isn't seen online. Respect is higher when talking face to face.

    So it's not really just about "meeting up"; it's about the quality of interaction that is lost online. Humans are built to interact through micro-expressions, body language, and tonality of voice. While we don't need all of it all the time, the dominance of online communication over the physical has led to a change in behavior.

    Together with the focus on individuality, the neoliberal ideology of the self, it has skewed the perspective people have of their ego.

    What we need today, more than ever, are social groups not just meeting, but building something together in the physical realm. A step back from the individual perspective, from the focus on the ego, and into a collective realm in which social groups build something together.

    It's no surprise that there's been a rise in isolated groups over the years, and stronger polarization between different ideas and ideologies. The lack of a sense of collective as a society has pushed people into other forms of gatherings, which without careful guidance have formed into destructive ones: MAGA, incel communities, ethnic groups divided away from multicultural collaboration and into hostility against other groups, etc. We've even seen it in the extreme way that political parties have generated followers who are less open to actual politics, in which parties in parliament collaborate with opposing parties in a "give and take" structure. Previously, political parties collaborated across the spectrum with the intent of representing their voters' wills and needs in the halls of power. Now they only play a political game without any real vision, scheming ever more ill-willed strategies against opposing parties in closed rooms; everything is about sabotaging the other parties' politics rather than a give-and-take strategy for progress and problem solving.

    What all of this shows is that while neoliberal individuality has focused on the ego, that ego still craves the social realm; but lacking a collective dimension, it clusters together with whatever rhymes with that specific ego, and the group behaves outwardly with the same hostility as the single individual ego does at its core.

    It also shows that the individual craves something to be passionate about, and without a larger collective vision, they can only turn to these minor ones and double down on them. As you mentioned, the decline in spirituality and religion has created a void in the larger collective sense.

    But I believe the solution should be something that connects people within the context of a larger collective aspiration. We need a goal for humanity as a whole; something that feels like we're heading somewhere, without it necessarily having to do with religion. We need something that people feel we build together and can collaborate within.

    The major obstacle is that the largest policies today are controlled by corporations whose interest is profit. That's nothing people can collectively gather around.

    We therefore need a shift towards a collective goal that we can all build together. Something we can all believe is the right path forward for humanity. Something that gathers people across borders and breaks away from capitalistic profit-seeking.

    One such project, I would say, is building a new form of living that mitigates climate change. That project demands collaboration from all people and a dismantling of the selfish individuality that is toxic to us. Figuring it out requires innovators, engineers, philosophers, builders; collaboration across industries and different people; across borders, nationalities, and ethnicities.

    It could be such a project that we collectively gather around to achieve together, but for that we need to remove power from those holding us back. We need to stop being careless about who we vote for, who we support, and which industries we give our money to. We need to stop the cognitively biased rhetoric that's only there to hide our laziness, find a goal and a vision for a future with this problem fixed, and work towards it, gathering people around us for it.

    When you look at Woodstock 69 as you mentioned, the one thing that is key is that they were a social group who focused on the good of the collective rather than the expression of the individual. And the LSD helped with a lot of ego death that infused such mentality even further.

    We need more ego death, more of a large collective goal or spirit (without it having to be religion), and we need projects and visions to focus on as a collective. Solving world problems and returning to a sense of collective achievement; looking up at the stars again and dreaming of new human achievements in exploration, reaching new heights as a species.

    People underestimate the importance of such dreams. The constant complaint against those who dream of things like space exploration is that we should focus on ending poverty instead, but that misses the point of what such dreams do for us. Let's say we end poverty... and then what? The emptiness of just existing without a sense of purpose kills people far more than a lack of food, because it kills the mind and leaves us empty bodies; husks mindlessly moving around, acting out confused emotions in the absence of a path forward.

    We need to dream collectively, we need to collaborate more as a collective and we need to kill the ego. That's the difference between Woodstock '69 and '99.
  • Climate change denial
    And of course... Republicans. Can we actually just conclude them to be collectively stupid? Like, what more evidence do we need?

  • Climate change denial
    The problem is that any breakdown in civil order would inevitably disrupt commerce and turn politics more authoritarian.Punshhh

    It's primarily industries that need to be changed by force. Regular people will surely hate the consequences of the industry changes, but new industries that follow the new path will pop up long before people start voting for dictators. The least they can do is tax carbon emissions, heavily, and use the money to directly fund engineering solutions for mitigation.

    Programmes of education to educate the population in the severity and pressing nature of the threat would be effective in spreading the word.Punshhh

    Education doesn't seem to help much with those last percentages of people, who are enough to swing elections toward candidates who oppose green industries.

    You say we are able to make the necessary changes and prevent catastrophe. But I would say it is too late nowPunshhh

    It's too late to avoid some consequences, but giving up would be far more catastrophic. There's no point in simply stopping mitigation. But we have to speed up the change, and do it fast.

    It looks as though the transition to carbon neutral transport is not going to be rolled out in time and may fail, with either a move back to oil, or a collapse of transport systems.Punshhh

    Moving back to oil just to watch the entire world collapse is simply a stupid strategy. Just burn the oil industry (not literally). Crack down on the corrupted politicians taking money from it, by force if needed. Block oil entirely, or partially (to keep transportation running while green replacements are built up).

    People buy what is on the market, so remove oil-driven products from the market. Have governments ban new gas cars earlier than currently planned. If the industry bitches about it and attempts some kind of blowback, put them in jail.

    It's basically a war against the consequences of climate change, and there are traitors walking about.

    The rest of the world would be cut loose and would have to fend for themselves.Punshhh

    Billions against a fortress? Politicians in high places would soon enough be toppled if that ever happened. Desperation forces people into the only option they have, and getting into revolutionary mode à la France can move mountains.

    I just wish people would argue for a more serious push against the oil industry than has been made so far. There are simply too many politicians in the pockets of the global oil industry, and the politicians who directly own their part of it need to be starved out of power. Russia should be totally isolated. China should be totally isolated. With the only key to the door being that they stop oil. If not, they can hunger until the people storm their leaders' castles.
  • Climate change denial
    But there is an enormous inertia in the system and the culture. Many of us are banging our heads against this wall of inertia.Punshhh

    Yes, the system itself is the problem, and people rely on the system too much. To fix this, we need to break the system, even if it has to be healed afterwards. The consequences of breaking its stability will be far smaller than those of taking too long to change course.

    Eventually one realises all we can do is play our part from the position we are in within society. Ideally one would become a politician run for office and change things. Or figure out a way to change peoples minds through some kind of media organisation, or protest group. But again the inertia hits home and many people are already doing these things. In fact some of these people are pushing so hard that media campaigns are growing to discredit them as extremists and pull more people into climate denial.Punshhh

    Activists in this area are just as much morons as the deniers and oil shills. They're the other extreme end, featuring people who can't handle the psychological stress, so they act out of desperation rather than rationality.

    But the problem is that there's no time to play along as usual. I'm serious here: the largest contributors to emissions need to be put under such pressure that they collapse as economies if they don't change course. China, for instance, is the world's largest contributor to emissions. Its economy needs to be crippled to the point where it accepts it has to change, and the same goes for every other major emitter. The global economy will crash because of this, but it has to be done, as money is the only thing that moves this world. The problem is that politicians do not take action; they try to have their cake and eat it too. They can't turn their backs on voters who are deniers or who don't care about climate change, so they play it down. They do the absolute minimum required by COP, and COP itself only arrives at minimal conclusions that scientists criticize as too little every time it gathers.

    If people think such a breaking of the system would lead to conflicts and war: yes, it might. But imagine a world with billions of people relocated and battling for resources during famine, and societies needing to rebuild their infrastructure and housing due to shifting environmental conditions. What wars would that generate?

    What we as regular people can do is, as I said, view everyone who doesn't take this seriously as immoral for downplaying its seriousness. There's a big difference between viewing them as having the wrong opinion and seeing them as immoral; that shift produces social change. The problem is that the scale and dimension of the consequences of passive behavior isn't communicated. The seriousness of collective passivity is downplayed. If the link between being passive or dismissive now and the consequences a few decades from now is established, it becomes much easier to view this passivity, and these people, as immoral.

    But we are still acting like it's just an opinion, like it's a behavior that's fine. It's like the 40s and 50s, when it was fine to be a racist, fine to divide people by color. At some point it isn't fine anymore, and if you express racist opinions or behave like that in public, you'll get punched in the face and people will cheer it on. That's the level of social behavior we need to reach in order to seriously pressure politicians and public opinion. Even then it will be hard, seeing as plenty of politicians still win elections with downright outspoken racism. That still happens today, but at least such power usually can't survive long if the social ideal is to punch a racist.

    So one reaches a point of acceptance, an acceptance that the crisis is enormous and irreversible and we as a species are to weak to prevent it. This is quite normal, the list of species extinctions in the fossil record is long and there is an inevitability to it.Punshhh

    We are not too weak to prevent it, we just need to do what it takes. When the pressure is on, people won't be weak; they will fight and kill for change. That's where we're heading if we don't act now.

    It is too late now to overcome this current cycle of climate change, however if some portion of humanity can survive, adapt and preserve our intellectual and technological achievements sufficiently that they can be conveyed to the next flourishing of civilisation. There is an increased chance of achieving a that custodial role.Punshhh

    Or just change course now. If that's our future and people start to realize it's a very likely outcome, they will pick up guns and remove anyone who doesn't actively work to fix it. It's easy to ignore now, but when enough people get the short end of the stick, they will organize and do something soon enough. We might see billions of them. Billions who have nothing left to do but storm the castles of immorality.


    I think the way you describe it is how many people view things, especially in places that seem to be out of danger. But people don't realize that there is no such place. The changing climate collapses ecosystems and produces a cocktail of consequences, many of them unpredictable, as we've already witnessed. This escalation will more than likely happen in our lifetime. If people care for their children, what future are they giving them? Putting blindfolds on the kids, soothing them into a belief that everything will be fine, and then kicking them out into a world that is breaking apart?

    Adults today are so inactive and passive that young teenagers have essentially given up. The depression around this subject among young people is so severe and their parents just don't seem to give a shit. It's appalling in my opinion.

    And I actually don't see most people accepting how serious this problem is, or rather, they don't seem to accept just how serious it can become. I see most regular people as ignorant, putting on blindfolds and distracting themselves with mindless Instagram reels. Essentially they have their heads in the sand until the hurricane winds rip their bodies from their stuck heads. If they actually understood, they would speak more openly about it. But they don't, because it's socially awkward to do so, socially awkward to be angry about how things are. Changing that would make things go faster.

    And one such change would be to draw a clear moral line between the active and the passive person. If the deniers and passive people are considered immoral, then people will start to express themselves much more on the matter. People will find it much less awkward to socially be outspoken about the issues. People will find it is moral to talk about solutions, to have it as a conversation starter.

    People aren't talking right now, they are quiet.

    How much further does it have to go, how much does it take, before regular people stop voting for politicians who downplay the problems? How much further before people put pressure on world politics? How much further before people start talking about the issues much more openly?

    I suspect that when the first bullet is fired by a guerrilla or resistance group fighting for a piece of land because their own nation has become uninhabitable, then people will realize just how dire the situation is. Then it will be an illogical thing to say "go back to your own country", because they can't, and the number of people and militant groups born out of such desperation will grow, and grow, and grow. And they will creep closer and closer to the comfort of people's homes. Then, maybe, regular people will start to get the fucking point of how serious this thing is.
  • Climate change denial
    There may be a handful of sceptics who genuinely don’t accept the science. But they will fade away soon as the climactic impacts start to be felt.Punshhh

    A handful is enough to sway politics in favor of populist leaders who keep the necessary mitigation from happening in time.

    The impacts of climate change will change their minds soon enoughPunshhh

    Yes, as I said, that is the scenario if we fail to act now. Or we do what's necessary and don't let millions die.

    If it is immoral to let millions die and plunge the world into an economic and relocation crisis of historical proportions through inaction, would it be more immoral to take away people's voting rights if they deny that action needs to be taken? Is that a cost to the world worth keeping their democratic votes, or is it just a good example of why Kantian ethics aren't enough to make moral sense of every situation? I sense that such things get people's blood pumping with slippery-slope scenarios of totalitarian governments, but no, it's about one thing and one thing alone: getting on the right path to avoid disaster. In a battle for the health of the world and all its people, that requires a level of martial law, as it is a war against inaction. Any industry that has no strategy or plan to change course loses its execs and board; any politician without a serious plan for changing a nation's course in time is removed from power. The blame cannot be put on the people, as the people can only follow how society is structured. The only blame they can carry is for who they put into power, and everyone needs to be prepared for major economic turmoil as assets are reallocated from the current non-solutions into solutions.

    As scientists witness more and more actual consequences of climate change, it is becoming clear that the consequences have been badly underestimated. If this continues, we actually don't know how severe it can get. An ecosystem can absolutely survive, but in what state? Losing algae in the sea would produce another tipping point. And the collapse of certain groups of species can lead to new pathogens and invasive species that could cause new pandemics and famine on a scale never seen before.

    I don't think people realize how delicate the balance of the world is. The economy is a good analogy. The most minor problem can cause extreme fluctuations in the global economic balance. The war in Ukraine and the subsequent blockage of gas from Russia caused an energy crisis, which helped push us into a big inflationary spiral. The blockade of the Suez Canal alone was able to put the entire world into economic fragility, but it was the sum of the Ukraine war, the pandemic, the blockade, the energy crisis and the China/Taiwan unrest that put the global economy into turmoil. Translate that into the world's ecological balance and temperature, and people underestimate what the change does to the planet. It's as if people only think the sea will rise and the warmest parts of the world will get slightly warmer. In Scandinavia, some people think it will be nice to grow more wine as the region gets warmer; what the hell are they talking about? People seem unable to extrapolate a logical overview of the consequences. If even scientists underestimate the damage, or downplay it out of fear of being attacked as alarmists by the idiots of society, then imagine how bad the general population is at accurately predicting the level of damage we face.

    In my view: rip the fucking band-aid off, and then we can heal the world from there. That's much easier for everyone than trying to heal a broken world.
  • Climate change denial
    Because you're sticking to your old guns.baker

    You only know the things I write here; you know nothing else. Yet you act upon that lack of knowledge and pass judgment. This is just a dishonest attempt at framing the other party in a discussion, a form of ad hominem.

    I'm not criticizing you for being rude or mean, I'm criticizing you for being ineffective. Because I want you to be effective.

    You have some really strange ideas about my intentions here.
    baker

    And you are too vague about your intentions, as well as framing them in very odd rhetoric.

    Once again, you know nothing other than what you read of me here. As I've pointed out many times now: if there are deniers, there's no point in trying to convince them, as they are acting through a cult mentality. You cannot convince them as long as they are deeply rooted in their community of denial.

    So what efficiency are you talking about? Efficient in achieving what, exactly?

    For me, a question like, "How do you talk to someone who thinks that mankind will adapt to whatever comes, when it comes; so that this person will change their mind and act differently, more in line with planet preservation?" makes perfect sense, to you, it clearly doesn't.baker

    And it's been done to death. How much more education do these people need? The denial group has slowly started to shift into mere acceptance of a changing world, but they do so in the context of still not acting. The outcome of their reasoning is the same as their previous outright denial.

    If they are unable to understand that mitigation is still necessary so as not to ruin everything completely, and that inaction will cause millions of deaths, then they haven't really been convinced; they have only moved the goalposts of their denial to a new position of defending their inaction and ignorance.

    Why should the world cater to these people? Why should we continue wasting time trying to convince them and not just move on with the debate towards what solutions will work best?

    But is being harsh to those people leading to the result you want, namely, an improved state of the planet?baker

    By ignoring them and implementing changes to society anyway, yes, we will save millions and mitigate the worst damage. There's no time to build public opinion by convincing these people; it will be too late. The strategies need to circumvent slow progress, and the damage of such rapid progress will be microscopic against the consequences of not doing so.

    Treating these people as immoral is not an act of entitlement; it is an act of building a collective sense of morality that can drive changes in society. If it is considered moral to support actions taken to mitigate climate change and immoral not to, then social structures will form public opinion, rather than it depending on people who are uneducated or unequipped to understand complex knowledge.

    Structural racism has rarely been fought by educating racists not to support it; that does not work until they've already, instinctively, left the racist mindset. Instead it is the moral dimension that has been most effective in transforming society: reframing the division of people as an immoral act at its core. Then people don't have to understand any complex knowledge about a subject; they just have to accept the more instinctively programmed moral codes in the social structures they live in. That's why I don't just call them uneducated idiots or conspiratorial cultists, but also immoral people who support a destructive movement, through inaction or through active action against mitigation efforts.

    View them as immoral people, just like racists, abusers and other immoral people. Don't act like they're merely expressing opinions with some balanced value, because there is no such balance. It's like saying a racist statement is just as morally acceptable as a statement about love. It's not. Making statements that push public opinion towards ignorance about climate change is an immoral act that, with enough collective public drive, can cause delays that will kill millions. It is pushing dominos in a direction of pure horror, and that is simply immoral.

    I think it should still be possible to talk to such people in ways that will get through them.
    It might just take more creativity and effort, and inventing new strategies.
    baker

    You don't think this has been tried for decades now? There's no time left to keep doing it. If we had 50 more years to slowly change people's minds, yes; but just look at how far anti-racism has gotten. Shouldn't we have been freed of such idiocy by now? Aren't we educated enough by now to understand how immoral and stupid racism is? We still have major problems with it, and more education and trying to convince racists does not help. The only thing that helps is to shut them up and make policies against racism.

    If you have some idea that hasn't already been tried to death for convincing these people, then everyone's listening. But there's no practical value in just pointing out that there "should be some way to convince them". If we are to act now, the solution is to ignore them, make policies regardless of their opinions and shut them up. There's simply no time to educate these adult children.

    How are you going to "just do what's needed"? By abolishing democracy?baker

    When it comes to climate change, this has nothing to do with abolishing democracy. It's not a question of opinion or idealism; it is a fact about our world, and a fact that points in a problematic direction for everyone. Everyone, globally, should take action towards mitigating climate change and stop listening to these immoral people. That is not the same as abolishing democracy.

    Like, say a comet were coming towards us and the entire world economy and all nations needed to act together to solve it fast. Would you leave that up to democracy? To be debated? To trying to convince idiots that the problem is real? No, all nations would just move towards solutions as if driving a giant bulldozer. They would run the idiots over, and everyone who understood the danger would cheer it on.

    I think they just fight against having their minds changed by the strategies used so far. Other strategies might yield better results.baker

    The people we are talking about are not discussing the most effective strategies; they are opposing what would be minor inconveniences in their lives. The only ones equipped to really decide the best strategies are the actual scientists, experts and engineers working on the problems. Regular people should shut up and listen to these experts. Politicians should shut up and listen to the solutions. The moral dimension of the subject needs to become clearer to the public.

    As an example, I used to work as a mathematics tutor. A high-school student came in to be tutored on linear functions. This was her last chance; if she failed the next test, she would be expelled from school. The situation was dire. She was first tutored by an older tutor, and I witnessed some of their sessions. It was clear right away that the student didn't have a grasp of fractions or the rules for solving equations. Without mastering those basics, it's impossible to do linear functions. But the old tutor insisted on working on linear functions with the student. They made no progress and he gave up on her, declaring her a hopeless case. The student was then assigned to me. We spent the summer learning fractions and the basic rules for equations, things she should have mastered years ago. She passed the test, completed her education, and even earned a master's degree.

    Your attitude is that of a teacher; a teacher's goal is to teach. My attitude is that of a tutor; a tutor's goal is to get the student to learn the subject matter, (almost) no matter what it takes.
    baker

    Failing that class would not result in potentially millions of deaths and drastically changed living conditions for the entire global population. Sorry, but this analogy does not work for climate change: it lacks both the moral consequences and the time pressure to avoid them. Class is over, the semester is done, action needs to be taken.

    There you go, outsourcing responsibility again.baker

    How am I outsourcing responsibility when I point in the direction of the ones who are actually responsible for taking the actions needed? What responsibility are you suggesting I have and take? Isn't it responsible to push for action now and not care for the saboteurs working against these necessary actions? Isn't it taking responsibility to push for a moral framing of this subject and abandon the idea that this is some debate of ideals?

    What type of responsibility is valid in your book, considering the urgency of action and the lack of time to educate people actively giving experts the finger? Did that student give you the finger when you tried to help her? No, because your analogy is about people who want to be educated; it's nowhere near the reality of this subject. I'm all for education, but we don't have time to educate people into supporting solutions to a damn comet on a collision course with Earth. In such a situation you simply ignore the ignorant and take the necessary action right now.
  • Climate change denial
    The statement "We ought let the tides rise if it means preservation of our current capitalistic economic models and structures" is the moral claim. To deny that claim is to take an anti-capitalistic stance. This is where the debate actually lies. It's a battle over economic policy, not over science.Hanover

    Since there's no debate about the science, as the science is clear, why would it therefore be about economic policy as a capitalist/anti-capitalist dichotomy? Isn't it more a question of morality, i.e., what is the moral action for us to collectively take? With the right strategy and effort, the damage of rapid policy change would be microscopic against the reality of doing nothing to mitigate climate change.

    I don't see how there should be any debate other than about the best mitigation strategy. The debate should be about which actions are best and how to incorporate them into society in the best way: new green industries that not only mitigate climate change but also generate new jobs; how high carbon taxes could push industry people who care only about their balance sheets and bank accounts to actually change towards mitigation; when and where nuclear energy is better than solar, wind and sea power, and so on...

    If the debate centers on the science, it's pointless. If it centers on some capitalist/anti-capitalist political debate, it's also pointless. In this subject the science is real and proven, and economic philosophy just leads to navel-gazing about people's preferred world views. The debate at this time in history should instead be about strategies, mitigation solutions and how to implement them into society in a smart, practical way. Everything else is pointless, and every denier should be ignored just as thoroughly as they ignore the severity of the subject.
  • Climate change denial
    How do you plan to do that?baker

    Through politicians actually doing what it takes, instead of acting like demagogues worried only about losing votes in the next election.

    Can't you see what you're doing? You might have an opportunity to change something, but you're wasting it by indulging in your sense of entitlement over others and in justifying being mean to them. As opposed to devising a strategy that might actually work in producing change in others.baker

    How do you know I'm not doing that? And are you doing anything other than acting as an apologist for the people standing in the way of fixing things? Answer me: which is worse, not standing in the way of necessary change, or defending those who stand in the way? What's the point of that?

    And you claim I'm only acting like this to justify being mean, which is an intentional misinterpretation of what I actually do. I am mean because that's what apologists deserve, as they are actively standing in the way of necessary change. Collectively these people form the political public opinion that holds the necessary changes back, and in turn they are indirectly responsible for any deaths linked to the rapidly changing climate. I am very much within my rights to treat them accordingly:

    If you are told that certain paths ahead will result in people dying and societies being destroyed, and that there is a path that avoids all that, and you actively choose not to take it, and not only that but you actively try to sabotage anyone trying to change course while spreading disinformation denying that any of it is even an issue; then that is a deliberate act of sabotage with a direct link to the consequences that could have been avoided.

    It's like having a drunk person in front of you who says he's going to drive home, and you know his route passes a school. One person is trying to convince him not to drive home drunk, but another person is trying to push that first person aside and tell the drunk not to listen: that there is no risk, no problem, just drive home and do it as fast as he can. You have the choice of supporting one of these opposing sides. You can support the one trying to talk sense into the drunk by pushing aside the one egging him on to drive. Or you can support the other person and push aside the one trying to talk sense into the drunk.

    Neither of your actions in that scenario makes you the one directly running over and killing kids. But it's quite clear that your actions help push opinion towards an action that could very possibly do so. Why wouldn't anyone assign you partial blame if kids got run over by the drunk?

    This is why people who are apologists for those standing in the way of necessary change towards mitigating climate change should be viewed as immoral, and treated accordingly. So I have no problem being harsh or mean towards these people, and that's not entitlement; that's just me having a working moral compass. The really entitled people in all of this are the ones who don't want to change their ways and actively fight against anything that would require them to, even if not changing leads to kids getting run over in the future.

    And with this in mind, what do you think is the best way to approach people?baker

    As I have said, trying to talk sense into them does not work. It has been the strategy for decades. If they are uneducated, egocentric and acting like gullible idiots, you can try to convince them all you like and they still will not budge.

    If that leads to time running out for the necessary changes, then you simply have to stop giving a shit about them and do what's needed. It's that simple. There's no time to change the minds of people who actively fight against having their minds changed or being properly educated. So politicians and industry leaders need to simply act anyway, even at the risk of losing votes. But then again, people would vote some idiot into power who would push ignorant policies. If we can underline the immorality of regular people acting against necessary change, then we can at least create a cultural foundation for the morality of regular people on this subject. As I've exemplified, I have no problem calling them immoral, and people should stop beating around the bush on this as well.
  • Climate change denial
    and so when members bring their two cents to the issue, it makes knowing who to ignore on others issues very clear. So that’s useful. I say there’s been anywhere from 6-12 people so far. Saves me time.Mikie

    Yes, how people handle knowledge is a trait that transcends specific subjects. Even within science, I tend to see that people with biased ideas about something else tend to be biased in their research as well. That's why I'm always skeptical of religious physicists. At most they can be agnostic; holding a strong belief while conducting science mostly seems to influence how they treat their own conclusions, sticking to their guns longer than others when facing criticism. Thank the gluons we have consensus praxis.

    I’m thinking of going to an evolutionary biology course and explaining to the professor that the reason the subject is “controversial” is because they’re too mean, not empathic enough, not effective in how they communicate, are too harsh or judgmental, etc. I’ll pretend to be a Buddhist monk like Thich Nhat Hanh. This way I can feel like I’m involved in evolutionary biology.Mikie

    :lol:
  • Climate change denial
    We who? Educational? You have shown that you do not even know statistics, how are you going to educate anyone?Lionino

    We who actually understand the science, we who understand the problems, we who don't attach identity to this entire subject and use it as punchlines for something else.

    And not doing the research that people should actually do on their own is not the same as "not knowing the statistics". You can search publications yourself, you can dive into all that research; the information is everywhere if you know where to find scientific data. And there are plenty who are trying to educate people, but, you know, people don't listen, because they don't care to listen. They decide what they agree with before they hear it, based on arbitrary ideals and emotional reasoning.

    Then they formulate arguments around such biases and believe they are actually intellectually engaging with the subject matter. But they're not; they're using rhetorical twists and turns not to convince the other side, but to make sure they never have to acknowledge being wrong. It's the same behavior as flat earthers and other conspiratorial thinkers. It doesn't matter that there's a truckload of evidence, that there are educators and experts everywhere who freely provide their knowledge to anyone who wants it, or that the publications are out there to be found. The reason has nothing to do with what is true or not; it has to do with them.

    In this individualistic "me me me" society, we've collectively nurtured a population into believing they are the center of the universe, knowing all and able to judge what is true or not. People are gullible idiots in their basic form, and only their behavior towards knowledge defines their ability to truly navigate the complexity of our reality. We've just entered an era in which the important lesson of handling knowledge with care has been pushed aside by the ego of individuals.
  • Climate change denial
    You're inconsistent.


    This is the choice that defines the coming decades of the world.
    What choice, if you plan to "run them over"?
    baker

    How is that inconsistent? I mentioned what needs to be done to change course, and if people don't make that choice, then the only outcome is for everything to collapse until the world's population begs for change. Speaking about different possible outcomes does not make what I said inconsistent.
  • Donald Trump (All General Trump Conversations Here)
    Since I don't believe that democracy is a good or viable way to organize society, the point is moot anyway. If anything, I'm a monarchist.baker

    I agree that democracy has problems, but the solution isn't autocracy. It's to evolve democracy into a better system.

    The problem, however, is that people in society don't seem to have the capacity to actually evolve systems and ideas. Rather, they attach their identity to a system they prefer and defend it to the death.

    Democracy is far better than any other system in existence right now. However, it is easily skewed by corruption and demagogues, so the solution needs to address those problems as a feature of the system. Right now, countries with low corruption and responsible politicians do show examples of how good a society can be when things function, but there are no guardrails against such a society falling into corruption and irresponsible politics. So we're basically accepting democracy as a thin bridge with a deadly drop underneath, hoping we can keep our balance in strong winds without guardrails.

    Then let's build the damn guardrails instead of bashing old concepts against each other as if any of them were a solution. None of them are; all of them have faults. We should look at what works and what doesn't, and build from that. But society is too naive, too far up its own ass thinking it's full of intellectuals. Marx didn't take an existing theory and bash it against capitalism; he looked at the problems and pointed them out with a new theory. And while I'm not saying Marxism is the solution, I'm saying that no one actually does any damn thinking towards improving society; people only play at political philosophy these days.
  • Climate change denial
    Sabine does a good job of calmly saying "get your fucking shit together". But unfortunately nothing will happen until the people in power and the industries around the globe start to see their resources dwindle, or until people have had enough of floods and heat waves and start removing these powerful people by force.

    Regular people will continue to not want to change, until the world changes so much that they simply have to.
  • Donald Trump (All General Trump Conversations Here)
    You do realize that right-wingers present themselves as the great "defenders of democracy"? That they accuse the centrists and lefties of "demagogy"? That they are "working hard" to "educate the people" and to open their eyes to make them "see the truth"?
    This is right-winger language.
    baker

    That they take concepts, words and language and twist them does not mean the core of their sentences means the same thing. Manipulating people by twisting language just becomes another tool of power.

    If people can't tell the difference between propaganda and analysis... well, then there's nothing to be done. If you can't understand the difference, how could anything ever expand your perspective?

    It's not "Shakespearean". Please.baker

    It's not wrong either.