• Artificial Superintelligence will destroy everything good in life and that is a good thing.
    This is a term used to describe a person making a clearly irrational decision, say to have a quick fun fling, at the cost of sometimes a great percentage of one's finances, the security of one's family, one's job, etc.noAxioms

    I suppose you are right in the sense that there will always be aspects of human nature that work separately from our logical faculties.

    The AI subject interests me a lot, partly due to being close to the business.noAxioms

    You're involved in AI?

    The first is more like the scientific method. Start without knowing whatever it is you're trying to discover, and come to some conclusion after unbiased consideration of all sides. Rationalizing is what a government study often does: Start with an answer you want to prove and choose evidence that supports it.noAxioms

    But isn't the latter exactly what the scientific method is? Any experiment conducted under the scientific method starts with a hypothesis of what you are trying to prove. Isn't any attempt to understand the world rationalizing?

    The AI can be as smart as it wants, but eventually it will have to put restraints on the lifestyle envisioned by "give peace a chance", and those restraints will be resented.noAxioms

    This is what I am suggesting in my original post. People want the world to be peaceful, but the same people don't want to give up what it is that makes them human in the first place. If peace is a freedom from disturbance, it is unattainable through human instincts.
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    Survival of the fittest refers to a fit species, not a fit individual.noAxioms

    Perhaps that's true on a grand scale, but it doesn't change the fact that the survival of the species depends on the survival of its individuals. I don't think I ever said that survival of the individual was more important than the survival of the species. What I am saying is that anything a species has evolved over time is either something that enabled it to survive in the past, or a mutation of something that did. As for you being fit, it certainly doesn't hurt the species for you to have the desire to be fit. That is beneficial for the species as well as for you.

    I don't think you can choose rationally, except in cases where it doesn't matter to your core instincts.noAxioms

    I'm not saying I live a life devoid of anything other than reason. I'm curious what you mean by core instincts, though. Like fight or flight? Then no, my rational mind would be overpowered. Emotions? Desire? While both of these are arguably instinctual, in that I have no real control over what I want or how I feel, it is possible to understand them further and make rational decisions about how to deal with them. You ask for examples? I can understand what makes me happy, what makes me sad, and what I desire, and I can use logic to satisfy my desires and avoid sadness as much as possible.

    I had my own, and finally rationalized something (on the order of for whose benefit do I draw breath?) that blatantly conflicted with the irrational assumptions, and the belief was not open to being corrected.noAxioms

    Why wasn't it open to being corrected?

    The super-AI, having no history of evolution to give it fit beliefs instead of true ones, might actually be rational and would believe things no humans considered because we think we know it all, and would then behave in a way quite unanticipated to us.noAxioms

    What do you mean when you say it might be rational? What is the difference between being rational and rationalizing something?

    The danger of it is that we can't predict what a greater intelligence will figure out any more than mice would have anticipated humans knowing about quantum mechanics.noAxioms

    I don't necessarily think that is true. That depends entirely on how we program it. If we define intelligence as the ability to acquire knowledge and skills, then by creating superintelligence we're really just speeding up that acquisition. Knowledge and skills are only valuable insofar as they can be applied. If they were applied to problem-solving, I think we would rapidly solve all of our problems until the problem of survival were the only one left. Then what? Transcend time itself, maybe, but I can't even pretend to know what that means.
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    Being fit. It does me no benefit to be fit, but that's how I'm programmed.noAxioms

    Being fit is a good purpose in life, but the desire to be fit can be reduced to survival instinct; hence the phrase "survival of the fittest." Being fit is just another aspect of maintaining a quality life.

    I think I understand it, and the irrational is in charge. Doesn't need to be, but the part in charge seems also in charge of which half is in charge. That means I want to be irrational. I have no desire to let the rational part of me call the shots. It hasn't figured out any better goals so it would only muck things up.noAxioms

    I guess I agree that the irrational is in charge; I think that our emotions dictate any rational decision-making, and I don't think there is any escaping that for the time being. I still choose to live my life through my rational mind. I think that if I can understand the irrational foundation of my mind, I can do a better job of satisfying it. But I suppose it's possible I will come to a different conclusion later in life.
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    Oh damn, I didn't know that. I liked what you said above, btw. Interesting way of looking at things.
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    I've only read Beyond Good and Evil, but I'm familiar with some of the concepts from his other books. Why doesn't the will to power count?
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    Lol, but Nietzsche was the first philosopher I ever read. It was Nietzschean themes in pop culture that got me asking the deeper questions in life to begin with. You think that materialist philosophy is nonsense, but what else is there?
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    I wouldn't say I'm particularly enthusiastic about the elimination of the individual; I just think that many of the problems in the world are caused by individuality. I don't run my life around my sense of self. Along with the other things I mentioned above, I see the sense of individuality as a tool for survival. I don't limit myself based on who I think I am. Being an individual is only important to people because it is central to their ability to attain positive emotions. But the desire to attain positive emotions comes with a cost: the negative emotions.

    I'm arguing that the hedonic treadmill we all live on is a deeper part of human nature than individuality. This raises the question: is it worth it? Is it worth being forced to experience the negative in life in order to experience the positive? I don't think it is. People may feel that losing that part of themselves would mean losing the meaning in life, but I would argue that life has no meaning. The closest thing to a central meaning in life is the concept of survival; I think that survival is at the base of every human concept. I'm not sure who said it, but there's a quote that goes something like this: "Perfection is attained not when there is nothing more to add, but when there is nothing left to take away." Any goal we have in trying to solve the problems of the world, any attempt to make life better, will inevitably lead to losing the things that we think are important. This is because deep down, out of our control, there is nothing more important to us as a species than survival. If we take away the ability to feel positive and negative emotions, I don't believe individuality will be very important to us anymore. I think that with the development of AI, that is what will happen. I'm not really enthusiastic about it, and it probably won't happen in my lifetime. But if we continue to try to solve the problems of humanity, we aren't going to recognize humanity anymore.

    I personally don't want to play any part in that; I would much rather enjoy my life, because I don't see any of this happening in my lifetime. It will all be developed far more efficiently by a superintelligent AI than I could ever manage, and it will be run by people with the financial incentive to do so. This isn't really about what I think should be done, even though I do think it should be; it is about what I think will be done, by powers that are beyond my control and beyond my desire to change.
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    Survival is not my primary goal, but merely a means to the perpetuation of my genes.noAxioms

    If survival isn't your primary goal, then what is?

    The irrational is in charge, and the rational part of me is only a tool to it, not what drives my goals.noAxioms

    I agree that the rational part is only a tool, but is it true that the irrational is in charge, or is it just an aspect of your nature that you don't yet have a rational understanding of?
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    I don't think that the hive mind is a dead end; I think it is a very possible direction to go in through AI. I was planning on breaking my post up, but I decided to eat something first. I'm not sure what you mean by suggesting something is the matter with me, though. And for Christ's sake, stop calling me Shirley.
  • Artificial Superintelligence will destroy everything good in life and that is a good thing.
    Lol, sorry. Presenting my thoughts in an easily digestible way has never been a talent of mine.
  • Technological Hivemind
    You didn't get the argument then. If the hive mind is one puree mind, then joining it is, for all practical purposes, like death. I might as well kill myself because there is no benefit to me.Chany

    If you believe that it wouldn't benefit you in any way, and is equivalent to death, I'm curious what your purpose in life is now. In what way would the hive mind interfere with that purpose?
  • Technological Hivemind
    What I imagine you are thinking of is the pureed mind, where all individual distinctions have been lostBitter Crank

    That would be a correct assumption. What I am thinking of is a way of connecting ideas. Most of our problems in society today come about as a result of differing ideologies. What this hive mind would do, by eliminating some aspects of the self, is provide a good way to meet in the middle. If the purpose of society is to promote harmony and cooperation, then this would be the perfect society. I think that a lot of people hold a strong belief system as part of their identity. Not everyone's belief system can be true, and the only way to effectively measure the truth is through scientific investigation. Before science, different ideologies evolved as a mechanism for survival: good ideologies survived, poor ones died out. Individuality was important to that process; it was the right way to do things. But there comes a point where individuality doesn't serve a purpose beyond self-interest, and we are approaching that point rapidly. What purpose does the concept of individuality serve beyond self-interest? If self-interest is the motivation behind the desire for individuality, wouldn't it be best to unite our self-interests through the use of a superintelligent AI and connect ourselves to it? Wouldn't that solve an astronomical number of our problems? If the sacrifice of individuality were necessary for a permanent reduction or elimination of long-term suffering, isn't that the best option we have for our future? Wouldn't it be morally wrong to cling to our individuality under the belief that it makes us 'special' in some way?
  • Is happiness a zero-sum game?
    Yes, happiness is a biological phenomenon, and yes, the concept refers to that phenomenon. You could say the same about suffering, though. Is there any scientific evidence to suggest that the two aren't related to each other? Is there any reason to believe that they don't exist on opposite ends of a spectrum? Does happiness really benefit your health, or is it your health that contributes to your happiness? The same goes for fitness, and for your social skills: couldn't it go the other way as well? Wouldn't suffering do the opposite to all of those things? It makes sense to me that happiness is the evolved chemical reward for doing things beneficial to survival, and suffering is the punishment. If we as a society were to get rid of EVERY cause of suffering, that reward would no longer be a reward. If everyone were 'happy' all of the time, would they really be happy? Would we still have use for that concept? Or would everything just be completely neutral?
  • Is happiness a zero-sum game?
    But would the concept of happiness exist without the concept of suffering/sadness? Would happiness have any value to us as humans if we had never experienced suffering?
  • Technological Hivemind
    Suppose at some point in the future we manage to create a 3D printer capable of printing on a subatomic level. Anyone would be able to use such a device to create a nuclear bomb and immediately wipe out humanity. This is just one example of very dangerous theoretical technology. We are approaching a point, technologically, where freedom of thought will eventually be too dangerous to be allowed. Is it really necessary in the first place? Why is freedom important? So that I am not oppressed by an unjust power? So that I have the freedom to do as I please? When the concept of freedom was first used, an idea like this would have been inconceivable. People demand freedom so that no one interferes with their quest for happiness. If we all shared the same happiness, would freedom still be a meaningful concept? Would individuality? Is anything more valuable than happiness?
  • Technological Hivemind
    To me it seems the internet is just the beginning. What happens when we start implanting technology into our brains? We already have governmental surveillance of virtually everything we do; if we were to start uploading our thoughts into computers, the government would be able to see that too. And I think it's only a matter of time until we do this. Elon Musk is already trying to. The only way to stop the government from misusing that kind of power would be for everyone's thoughts to be accessible to everyone. I don't think that would be a bad thing. It could play a huge part in putting an end to unnecessary suffering.
  • Technological Hivemind
    And not even that, because most people are, and must be, on the "exploited" end of the stick.Bitter Crank

    But is there anything better?

    Combining the alleged selfish motivations of 7 billion people into one mega-self might produce a hellish monster of cosmic-scale greedBitter Crank

    Would there be any negative consequences to that, though? What is greed if there is no separate individual left for it to negatively affect?

    It may be uncomfortable to think about being dead, but once you were dead, you wouldn't really care.

    Same thing.
    Bitter Crank

  • Technological Hivemind
    If we fused our minds (perish the thought) there wouldn't be "everyone" anymore, just one big ME. There would be no "greater good", only MY good. And who would ever remind the great ME to waste less than vast amounts of time on trivial matters? Nobody. There would be no one else.Bitter Crank

    Would it be a bad thing if it were just one big me? I believe that most people are motivated mainly by self-interest. That's why capitalism works so well: it's a system that functions by exploiting people's selfish desires. But it is an imperfect system; oftentimes one's quest for personal gain comes at the expense of other people. The simplest way to fix that, in my opinion, would be to combine all individual selves into a single mind. It could be a utilitarian utopia, and I can't come up with any actual downsides to it. It may be uncomfortable to think about a loss of individuality, but once it was gone I don't think you would really care.