• Where Does Morality Come From?
    My personal view on morality is that it is partially innate, in the sense that our individual moral codes are built on our evolved emotional responses to individual situations. The notion that there is some universal moral code doesn't make much sense from that point of view. People are only as moral as they desire to be and feel capable of being. The reason different cultures have different moral codes comes down to the interaction of all the individual moral codes within them. If you feel that something is right, and a larger group of people feels it is wrong, it is more advantageous to change your moral code than to go against the group. That's why different cultures have different moral standards. When it comes to morals, you either stay with the majority or face the consequences, and that is decided on a subconscious level by what many people consider to be their "conscience".
  • The relationship between desire and pleasure
    I also think it is important to note that I view the desire for pleasure as an escape from pain. If a human were to do absolutely nothing, it would begin to feel pain, probably thirst or hunger first. I think that would continue indefinitely in any case where pain is possible. So I guess I have this backward: maybe it's the aversion to pain that is the foundation of human motivation.
  • The relationship between desire and pleasure
    What about the desire for death? Not common, but some people clearly desire death, and through suicide find it. Pleasure doesn't seem to figure into it. For instance, the pleasure of escaping suffering seems far fetched. We get relief, but not pleasure per se when suffering ends.Bitter Crank

    I guess I wasn't as specific in my writing as I was in my thinking. I view pleasure and pain as being on a spectrum, so I would view suicide as being about pleasure in the sense that it is an escape from pain. Basically, the two are sides of the same coin.

    Like, "Everything people do is in the service of the sex drive."--a crude misstatement of Freud's theories. Yes, some of our behavior is very much in the service of sex--or libido--but it's difficult to figure how Einstein (or a few hundred thousand scientific researchers and theorists) are all trying to serve their sex drives by thinking about relativity, the Standard Model, Quarks, String Theory, or whatever the hell they are thinking about.Bitter Crank

    Interesting that you brought up this part. It makes me think of the quote, "Everything in life is about sex, except sex. Sex is about Power." When it comes to how that relates to scientists, I think it is theoretically possible that their driving desire is curiosity. But then why do humans have curiosity? Because it has been evolutionarily beneficial in the past to be curious, whether in terms of personal safety or any other way that curiosity might benefit the individual. The simple fact that an individual has the genes/social conditioning to be curious can be tied back to sex in some way, because I believe that every trait we are capable of having as humans exists for the sake of securing the ability to reproduce. Even that quote, "Sex is about Power": the two are obviously intimately related, but is sex about power, or is power about sex? I think it's probably the latter; humans desire power because it is another thing that increases the probability that they will mate in the future.

    Similarly, people get up and go to work everyday at the same, fucking shithole of a job -- because their families depend on their income, and they want to see their children eat well. They get pleasure from that, but again, it's not like the pleasure of Ben and Jerry's. It's much more complicated.Bitter Crank

    I would agree with you here. It is incredibly complicated. I suppose it could be argued that the thought of their children not eating well causes them pain. But I'm sure there are a lot of beliefs/conditioning that factor into it too. I'm just proposing that pleasure and pain are at the root of those beliefs. There would be other factors at play, but I think that pain and pleasure would be at the root of those factors as well.
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    This is a term used to describe a person making a clear irrational decision, say to have a quick fun fling, at the cost of sometimes a great percentage of ones finances, the security of one's family, one's job, etc.noAxioms

    I suppose you are right in the sense that there will always be aspects of human nature that operate separately from our logical faculties.

    The AI subject interests me a lot, partly due to me being close to the business.noAxioms

    You're involved in AI?

    The first is more like the scientific method. Start without knowing whatever it is you're trying to discover, and come to some conclusion after unbiased consideration of all sides. Rationalizing is what a government study often does: Start with an answer you want to prove and choose evidence that supports it.noAxioms

    But isn't the latter exactly what the scientific method is? Any experiment conducted with the scientific method starts with a hypothesis of what you are trying to prove. Isn't any attempt to understand the world rationalizing?

    The AI can be as smart as it wants, but eventually it will have to put restraints on the lifestyle envisioned by "give peace a chance", and those restrains will be resented.noAxioms

    This is what I am suggesting in my original post. People want the world to be peaceful, but the same people don't want to give up what it is that makes them human in the first place. If peace is a freedom from disturbance, it is unattainable through human instincts.
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    Survival of the fittest refers to a fit species, not a fit individual.noAxioms

    Perhaps that's true on a grand scale, but it doesn't change the fact that the survival of the species is dependent on the survival of its individuals. I don't think I ever said that survival of the individual was more important than the survival of the species. What I am saying is that anything that has evolved in any species over time is either something that has enabled it to survive in the past, or something that has mutated off of something that has. As for you being fit, it certainly doesn't hurt the species for you to have the desire to be fit. That is beneficial for the species as well as for you.

    I don't think you can choose rationally, except in cases where it doesn't matter to your core instincts.noAxioms

    I'm not saying I live a life devoid of anything other than reason. I'm curious what you mean by core instincts, though. Like fight or flight? Then no, my rational mind would be overpowered. Emotions? Desire? While both of these are arguably instinctual, in that I have no real control over what I want or how I feel, it is possible to understand them further and make rational decisions on how to deal with them. You ask for examples? I can understand what makes me happy, what makes me sad, or what I desire, and I can use logic to satisfy my desires and avoid being sad as much as possible.

    I had my own, and finally rationalized something (on the order of for whose benefit do I draw breath?) that blatantly conflicted with the irrational assumptions, and the belief was not open to being corrected.noAxioms

    Why wasn't it open to being corrected?

    The super-AI, having no history of evolution to give it fit beliefs instead of true ones, might actually be rational and would believe things no humans considered because we think we know it all, and would then behave in a way quite unanticipated to us.noAxioms

    What do you mean when you say it might be rational? What is the difference between being rational and rationalizing something?

    The danger of it is that we can't predict what a greater intelligence will figure out any more than mice would have anticipated humans knowing about quantum mechanics.noAxioms

    I don't necessarily think that is true. That depends entirely on how we program it. If we define intelligence as the ability to acquire knowledge and skills, then by creating superintelligence we're really just speeding up that acquisition. Knowledge and skills are only valuable to the extent that they can be applied. If they were applied to problem-solving, I think we would rapidly solve all of our problems until the problem of survival is the only one left. Then what? Transcend time itself, maybe, but I can't even pretend to know what that means.
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    Being fit. It does me no benefit to be fit, but that's how I'm programmed.noAxioms

    Being fit is a good purpose in life, but the desire to be fit can be reduced to survival instinct; hence the phrase "survival of the fittest." Being fit is just another aspect of maintaining a quality life.

    I think I understand it, and the irrational is in charge. Doesn't need to be, but the part in charge seems also in charge of which half is in charge. That means I want to be irrational. I have no desire to let the rational part of me call the shots. It hasn't figured out any better goals so it would only muck things up.noAxioms

    I guess I agree that the irrational is in charge; I think that our emotions dictate any rational decision-making, and I don't think there is any escaping that for the time being. I still choose to live my life through my rational mind. I think that if I can understand the irrational foundation of my mind, I can do a better job of satisfying it. But I suppose it's possible I will come to a different conclusion later in life.
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    Oh damn. I didn't know that. I liked what you said above btw. Interesting way of looking at things
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    I've only read Beyond Good and Evil, but I'm familiar with some of the concepts from his other books. Why doesn't the will to power count?
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    Lol, but Nietzsche was the first philosopher I ever read. It was Nietzschean themes in pop culture that got me asking the deeper questions in life to begin with. You think that materialist philosophy is nonsense, but what else is there?
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    I wouldn't say I'm particularly enthusiastic about the elimination of the individual, I just think that many of the problems in the world are caused by individuality. I don't run my life around my sense of self. Along with other things I mentioned above, I see the sense of individuality as being a tool for survival. I don't limit myself based on who I think I am. Being an individual is only important to people because it is central to their ability to attain positive emotion. But the desire to attain positive emotions comes with a cost, the negative emotions.

    I'm arguing that the hedonic treadmill we all live on is a deeper part of human nature than individuality. This raises the question: is it worth it? Is it worth being forced to experience the negative in life in order to experience the positive? I don't think it is. People may feel that losing that part of themselves would mean losing the meaning in life, but I would argue that life has no meaning. The closest thing that can come to providing a central meaning in life is the concept of survival. I think that survival is at the base of every human concept. There's a quote, I believe from Antoine de Saint-Exupéry, that goes something like this: "Perfection isn't attained when there is nothing more to add, but when there is nothing left to take away." Any goal we have in trying to solve the problems of the world, any attempt to make life better, will inevitably lead to losing the things that we think are important. This is because deep down, out of our control, there is nothing more important to us as a species than survival. If we take away the ability to feel positive and negative emotions, I don't believe individuality will be very important to us anymore. I think that with the development of AI, that is what will happen. I'm not really enthusiastic about it, and it probably won't happen in my lifetime. But if we continue to try to solve the problems of humanity, we aren't going to recognize it anymore.

    I personally don't want to play any part in that; I would much rather enjoy my life, because I don't see any of this happening in my lifetime. This will all be developed more efficiently by a superintelligent AI than I could ever manage, and it will be run by people with the financial incentive to do so. This isn't really about what I think should be done, even though I do think it should be; it is about what I think will be done, by powers that are out of my control and beyond my desire to change.
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    Survival is not my primary goal, but merely a means to the perpetuation of my genes.noAxioms

    If survival isn't your primary goal, then what is?

    The irrational is in charge, and the rational part of me is only a tool to it, not what drives my goals.noAxioms

    I agree that the rational part is only a tool, but is it true that the irrational is in charge, or is it just an aspect of your nature that you don't yet have a rational understanding of?
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    I don't think that the hive mind is a dead end; I think it is a very possible direction to go in through AI. I was planning on breaking my post up, but I decided to eat something first. I'm not sure what you mean by suggesting something is the matter with me, though. And for Christ's sake, stop calling me Shirley.
  • Artificial Super intelligence will destroy every thing good in life and that is a good thing.
    Lol sorry. Presenting my thoughts in an easily digestible way has never been a talent of mine.
  • Technological Hivemind
    You didn't get the argument then. If the hive mind is one puree mind, then joining it is, for all practical purposes, like death. I might as well kill myself because there is no benefit to me.Chany

    If you believe that it wouldn't benefit you in any way, and is equivalent to death, I'm curious what your purpose in life is now. In what way would the hive mind interfere with that purpose?
  • Technological Hivemind
    What I imagine you are thinking of is the pureed mind, where all individual distinctions have been lostBitter Crank

    That would be a correct assumption. What I am thinking of is a way of connecting ideas. Most of our problems in society today come about as a result of differing ideologies. What this hive mind would do, by eliminating some aspects of the self, is provide a good way to meet in the middle. If the purpose of society is to promote harmony and cooperation, then this would be the perfect society. I think that a lot of people have a strong belief system that they hold as a part of their identity. Not everyone's belief systems can be true, and the only way to effectively measure truth is through scientific investigation. Before science, different ideologies evolved as a mechanism for survival: good ideologies survived, poor ones died out. Individuality was important to that process; it was the right way to do things. But there comes a point where individuality doesn't serve a purpose beyond self-interest, and we are approaching that point rapidly. What purpose does the concept of individuality serve beyond self-interest? If self-interest is the motivation behind the desire for individuality, wouldn't it be best to unite our self-interests through the use of a superintelligent AI and connect ourselves to it? Wouldn't that solve an astronomical number of our problems? If the sacrifice of individuality were necessary for a permanent reduction or elimination of long-term suffering, isn't that the best option we have for our future? Wouldn't it be morally wrong to cling to our individuality under the belief that it makes us "special" in some way?
  • Is happiness a zero-sum game?
    Yes, Happiness is a biological phenomenon, and yes the concept is referring to that phenomenon. You could say the same about suffering though. Is there any scientific evidence to suggest that the two aren't related to each other? Is there any reason to believe that they don't exist on opposite ends of a spectrum? Does happiness really benefit your health, or is it your health that contributes to your happiness? Same for fitness. Same with your social skills. Couldn't it go the other way as well? Wouldn't suffering do the opposite to all of those things? It makes sense to me that happiness is the evolved chemical reward for doing things beneficial to survival and suffering is the punishment. If we as a society were to get rid of EVERY cause of suffering, that reward would no longer be a reward. If everyone were 'happy' all of the time, would they really be happy? Would we still have use for that concept? Or would everything just be completely neutral?
  • Is happiness a zero-sum game?
    But would the concept of happiness exist without the concept of suffering/sadness? Would happiness have any value to us as humans if we had never experienced suffering?
  • Technological Hivemind
    Suppose at some point in the future we manage to create a 3D printer capable of printing on a subatomic level. Anyone would be able to use that device to create a nuclear bomb and immediately wipe out humanity. This is just one example of very dangerous theoretical technology. We are approaching a point, technologically, where freedom of thought will eventually be too dangerous to be allowed. Is it really necessary in the first place? Why is freedom important? So that I am not oppressed by an unjust power? So that I have the freedom to do as I please? When the concept of freedom was first used, an idea like this would have been inconceivable. People demand freedom so that no one interferes with their quest for happiness. If we all shared the same happiness, would freedom still be a meaningful concept? Would individuality? Is anything more valuable than happiness?