• Shawn
    13.3k
    I see the future as pretty darn bright. Things like the end of the fossil fuel age don't really concern me. It's just basic economics: as long as an alternative is cheap and the cost-benefit allows it to evolve, we pursue that avenue. My conception of the future is quite, well, futuristic. Namely, I see the advent of AI as bringing forth enormous amounts of prosperity, for the masses and not the few, since you can't patent AI and it's self-evolving, meaning that you can't really control it either. I also see enormous potential changes in the energy sector, as new fusion-based devices will enable prosperity for the many.

    I used to subscribe to the singularity movement, where many things will happen at once when AI arrives on stage; but my personal opinion is that it might take longer than an instant for things to change. I also think we will likely become a multiplanetary species within the next decade or more.

    How do you think changes will occur, or what is your conception of the future as you see it? I am also quite interested in seeing what happens to the current education system, as I think it will have to evolve or change to be relevant in this coming new era.
  • iolo
    226
    I'd give humanity three hundred years tops. I think they'll probably give up education, in the circumstances.
  • Marchesk
    4.6k
    I used to subscribe to the singularity movement, where many things will happen at once when AI arrives on stage; but my personal opinion is that it might take longer than an instant for things to change.
    Posty McPostface

    The longer it takes, the better. A hard takeoff singularity is probably disastrous, as there's no way human society can adapt that quickly, and you end up with powerful technologies run amok. There's plenty of dystopian fiction exploring that sort of thing, and the friendly AI movement hopes the proper precautions are in place before we have general purpose AIs.

    I also think we will likely become a multiplanetary species within the next decade or more.
    Posty McPostface

    I have my doubts. Mars is less hospitable than the center of Antarctica in the middle of winter, and it's much farther away. That makes it very expensive and risky, and for what? To have a dozen or so humans call it home? They will be confined indoors or inside a suit at all times.

    Exploring Mars with better robots and at some point human beings, sure. But living there? Maybe in the long, long run when we can terraform the planet.

    How do you think changes will occur, or what is your conception of the future as you see it?
    Posty McPostface

    People at the turn of the 20th century were similarly optimistic; then we had two world wars, a nuclear arms race, and widespread environmental concerns. We could still have WW3, and an environmental collapse is a definite possibility.

    That being said, I'm more on the optimistic than pessimistic side about human civilization persisting and advancing, despite whatever difficulties the 21st century holds. But we really don't know whether civilization is inherently unstable and always leads to collapse, no matter the level of technology. It has been so with all past human civilizations. We don't know anything about alien ones, if they're out there. But one possible resolution to the Fermi paradox is that civilizations don't last, or there's a great filter ahead of us.

    Or maybe when we achieve a post-singularity world, they'll welcome us into the galactic club. However, imagine what a post-singularity world war would look like. Weaponized AI, gray goo, antimatter bombs, super viruses, and I'm sure nukes can still have their place.
  • Shawn
    13.3k
    The longer it takes, the better. A hard takeoff singularity is probably disastrous, as there's no way human society can adapt that quickly, and you end up with powerful technologies run amok. There's plenty of dystopian fiction exploring that sort of thing, and the friendly AI movement hopes the proper precautions are in place before we have general purpose AIs.
    Marchesk

    Yeah, I think you're right about a hard takeoff being too much for humans to adapt to at the get-go. However, the rise of AI can't, in any meaningful sense, be slowed down. I don't necessarily think this is a bad thing for us, as long as the alignment problem can be solved. I don't think the control problem can be solved, though. It will do as it wishes, and if that includes a psychopathic 'desire' to eradicate us, then there's no hope.

    I have my doubts. Mars is less hospitable than the center of Antarctica in the middle of winter, and it's much farther away. That makes it very expensive and risky, and for what? To have a dozen or so humans call it home? They will be confined indoors or inside a suit at all times.

    Exploring Mars with better robots and at some point human beings, sure. But living there? Maybe in the long, long run when we can terraform the planet.
    Marchesk

    Well, as long as people want to explore Mars, you can't really deter them from that desire. People still desire to climb Mount Everest, for whatever reason, so let it be?

    People at the turn of the 20th century were similarly optimistic; then we had two world wars, a nuclear arms race, and widespread environmental concerns. We could still have WW3, and an environmental collapse is a definite possibility.
    Marchesk

    Well, to dumb it down (not that you need it to be dumbed down, no insult implied), we have three events facing us as a species.

    1. The rise of general artificial intelligence.
    2. Ubiquitous energy for all, through fusion and renewable energy sources.
    3. Becoming an interplanetary species.

    1 is the most problematic, in my opinion, since I agree with Musk and others that AI is a real concern for us as a species. I have some ideas as to how to mitigate this problem. Namely, I think that if the human brain can be simulated, and thus give rise to AI, then that AI will come equipped with human emotions and be able to relate to us humans. In some sense it will have a soul or 'psyche' which we can relate to and which can reciprocate.

    2 and 3 aren't inherently dangerous, so again the main concern is #1.

    That being said, I'm more on the optimistic than pessimistic side about human civilization persisting and advancing, despite whatever difficulties the 21st century holds. But we really don't know whether civilization is inherently unstable and always leads to collapse, no matter the level of technology. It has been so with all past human civilizations. We don't know anything about alien ones, if they're out there. But one possible resolution to the Fermi paradox is that civilizations don't last, or there's a great filter ahead of us.
    Marchesk

    I'm also optimistic. I think civilizations can persist if we can overcome some of our literally MAD policies towards each other. It's like game theory in terms of the prisoner's dilemma, and the sooner we can have a remnant of civilization living off-world, the sooner MAD becomes useless.
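
    To make the game-theory point concrete, here's a minimal sketch in Python of the MAD-style dilemma (the payoff numbers are purely illustrative assumptions of mine, not anything rigorous): each side does better by arming no matter what the other side does, which is why both rationally end up at the bad mutual-destruction outcome.

    # Toy one-shot prisoner's dilemma for an arms race (illustrative payoffs only).
    # Higher numbers are better outcomes for the row player.
    PAYOFFS = {
        ("cooperate", "cooperate"): 3,  # mutual disarmament
        ("cooperate", "defect"):    0,  # disarm while the other side arms
        ("defect",    "cooperate"): 5,  # arm while the other side disarms
        ("defect",    "defect"):    1,  # mutual destruction (MAD)
    }

    def best_response(opponent_move):
        """Return the move that maximizes the row player's payoff."""
        return max(("cooperate", "defect"), key=lambda m: PAYOFFS[(m, opponent_move)])

    print(best_response("cooperate"))  # defect
    print(best_response("defect"))     # defect, so both sides rationally defect

    On that reading, an off-world remnant doesn't change the players' incentives so much as it changes what the mutual-defection cell costs the species as a whole.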

    Or maybe when we achieve a post-singularity world, they'll welcome us into the galactic club. However, imagine what a post-singularity world war would look like. Weaponized AI, gray goo, antimatter bombs, super viruses, and I'm sure nukes can still have their place.
    Marchesk

    I have some science fiction ideas about humanity experiencing a revolution in our nature via AI. I don't think any civilization can survive with violent tendencies. If we can overcome that, then half of our troubles with our survival as a species would be behind us.
  • BC
    13.6k
    It isn't with any delight that I shall rain on your parade into the bright future.

    I'd give humanity three hundred years tops.
    iolo

    That's a reasonably good estimate.

    It will not be evil, stupid, or short-sighted actions that we undertake in the future that foreclose our future. What forecloses a human future are actions we took in the past--beginning 200 or 300 years ago and continuing into the present. We didn't know in 1780, 1840, 1910, or 1950 what the long-term consequences of the industrial revolution would be.

    Some people discovered the existence of mortal danger in CO2 emissions around 30 years ago. They happened to be scientists working for very large energy companies. Revealing what they discovered was unthinkable to corporate leaders.

    An Inconvenient Truth was released 12 years ago. The Paris Climate Accord was finished 3 years ago. It has become increasingly apparent that levels of CO2, methane, and other greenhouse gases are continuing to rise, and with each part-per-million (PPM) increase, catastrophic global warming becomes more likely. In 1959 the CO2 level was 316.98 PPM (well above pre-industrial levels). 60 years later, we are now regularly above 400 PPM.
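
    As a rough back-of-the-envelope check (taking roughly 410 PPM as today's level, which is my assumption rather than a measured figure), that works out to an average rise of about 1.5 PPM per year over those 60 years:

    # Rough average rate of CO2 increase since 1959 (illustrative arithmetic only).
    co2_1959 = 316.98   # PPM, the 1959 level cited above
    co2_now = 410.0     # PPM, assumed approximate current level
    years = 60          # "60 years later", per the figures above

    avg_rise = (co2_now - co2_1959) / years
    print(f"Average rise: {avg_rise:.2f} PPM per year")  # about 1.55 PPM/year

    And the year-to-year increase has itself been growing, which only sharpens the point.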

    We could, of course, stop emitting CO2 and methane. Simple: Cease burning coal, oil, and gas immediately. No cars, no trucks, no planes, no trains, no tractors, no barges, no heat, no electricity. We would have to abruptly depend on existing wind, solar, nuclear, and hydroelectric generation, which is far, far short of current usage. Life as we know it would come to a screeching halt all over the world.

    We are not going to stop using all the energy we need.

    What is the upshot of this state of affairs?

    The upshot is that we are doomed. Not this week, not next month, not by 2020, not by 2050. If we are lucky, not by 2100. But the climate has already begun to change inconveniently and erratically, and the trends will continue, grow stronger, and become more disruptive. We have likely passed the tipping point where the measures which are FEASIBLE can have a significant effect on the future.

    The world won't come to an end. 300 years from now it will still be spinning and will still be orbiting the sun. The moon will wax and wane. The tides will rise and fall. Most likely we will not be here any more, and many other creatures will be absent as well.
  • BC
    13.6k
    when AI arrives on stage
    Posty McPostface

    There is nothing inevitable about artificial intelligence. If it ever exists, it will be a product of some large company, or consortium. It will be designed to suit the interests of the class which owns it. It will not exist on its own, "out there", growing like kudzu, burrowing into human civilization and undermining its foundations.

    There is probably not time to develop fusion power (it's been just around the corner, almost ready, all-but-working, etc. for decades). That goes for a lot of technological innovations: as global warming becomes a larger and larger threat to existence, fewer and fewer resources will be applied to long-term projects which do not address survival.

    We won't be going to Mars to escape from Earth. There actually isn't any reason to go to Mars in person.
  • BC
    13.6k
    I have some science fiction ideas about humanity experiencing a revolution in our nature via AI. I don't think any civilization can survive with violent tendencies. If we can overcome that, then half of our troubles with our survival as a species would be behind us.
    Posty McPostface

    Science fiction, as you say. Remember, it's more fiction than science.

    What do you mean, "any civilization can [not] survive with violent tendencies"? The Roman Empire was not run by Quakers, the last time I checked, and they lasted for 1400 years as a going concern, and a few centuries more in the East. The Romans cast a 2000 year-long shadow.
  • Noble Dust
    8k


    I'm pretty neutral. The human condition remains a constant. It's an imperfect condition. Technology has always been used for human flourishing and destruction alike. It's hubris to assume that the sheer depth of complexity of tech will bring about a state in which the human condition is improved. Tech that's more and more advanced just means more and more advanced means with which to either further human flourishing, or prevent it. It's the grand ol' enlightenment charade still at work, which is baffling, but really, not that baffling at all. One element of the human condition is that we're all stupid and none of us learn. Which is a harsh way of saying that the new generation doesn't learn from the old generation, ad infinitum.

    The future seems to be defined by hubris. Which is not new.
  • Shawn
    13.3k
    @Bitter Crank

    I don't think climate change is all that bad. I speak dispassionately because that's the only stance one can assume in this scenario. But it's a no-win situation. We seem able to adapt to various conditions and have in the past, and this time will be no different. Holland is already preparing for floating cities, and other nations are calculating the cumulative losses.

    There was a time in the past when Africa was covered with lush forests, and that might become a reality once again if enough CO2 is released into the atmosphere. I've done some research (a little), and the countries that stand to benefit the most from the new climate that awaits our great-grandchildren (won't have any, BTW) are Russia, Canada, and China.

    We will just have to adapt, and it's possible that billions of lives will be lost to inaction, which is likely becoming a reality. It'll be one of those comic book deaths, where the superhero dies and then returns in some sequel. Ho-hum, what else can be said about such a dark and daft situation without insulting anyone's sensibilities?
  • Shawn
    13.3k


    One scenario that is likely to occur, in my opinion, is that we all just detach ourselves from this reality and engage in a virtual one, where our minds are uploaded into some sufficiently complex mainframe or cloud computer. I don't think it's a very edifying future; but it's one where we can 'survive' nonetheless.
  • Noble Dust
    8k


    Why is survival presumably a good in that scenario?
  • Shawn
    13.3k


    To each his own? IDK, these are hard questions that aren't being discussed even in academia (let alone in policymaking and places of government) nowadays. What else can be said?
  • Noble Dust
    8k


    Lots can be said. The fact that these issues aren't being discussed highlights their importance; they're elephant-in-the-room questions. Questions that touch on the nature of the human condition, how technology interacts with that, and what technology should be used for are not "to each their own" questions. They're questions with definite answers. Again, what would be the "good" afforded to the human condition by uploading brains to some mainframe situation? Are you assuming that the people in power with control over the "brain uploading scenario" are people with good intentions? I don't see any reason to make that assumption. Bringing the state of the human condition back into the picture, it seems wise to assume no good will; it's wiser to maintain a neutral stance until goodwill can be reasonably demonstrated. What's worse, I think, is that Musk, Zuck, and co aren't particularly of good will or bad; they're moral toddlers playing in a morally doctorate-level game.
  • Shawn
    13.3k


    Morally, there isn't much that can be acted upon here. That was my intention in posting that quip. Millions if not a billion people will die from famine and loss of agriculture. The only good thing is that it's not an asteroid hurtling towards us or some such matter. So, there will be time to adapt if possible.
  • Noble Dust
    8k


    What quip? Do you mean this?

    One scenario that is likely to occur, in my opinion, is that we all just detach ourselves from this reality and engage in a virtual one, where our minds are uploaded into some sufficiently complex mainframe or cloud computer. I don't think it's a very edifying future; but it's one where we can 'survive' nonetheless.
    Posty McPostface

    ____

    Millions if not a billion people will die from famine and loss of agriculture. The only good thing is that it's not an asteroid hurtling towards us or some such matter. So, there will be time to adapt if possible.
    Posty McPostface

    I get very tired of these fake posturings where "the future of the race" is held up like a religious symbol. What content does that symbol have? In order for a future that doesn't include you to have content for you, it has to include you. It doesn't make sense. No one is fooled into thinking that (proverbial) you is so selfless that he only wants the best for the human race, regardless of whether you will actually participate in that future world yourself. That putting-off-of-fulfillment is analogous to a religious sacrifice. It is a literal religious sacrifice, but just not conscious.
  • Shawn
    13.3k
    I get very tired of these fake posturings where "the future of the race" is held up like a religious symbol. What content does that symbol have? In order for a future that doesn't include you to have content for you, it has to include you. It doesn't make sense. No one is fooled into thinking that (proverbial) you is so selfless that he only wants the best for the human race, regardless of whether you will actually participate in that future world yourself. That putting-off-of-fulfillment is analogous to a religious sacrifice. It is a literal religious sacrifice, but just not conscious.
    Noble Dust

    It's just a thought to contemplate, regardless of my level of selflessness or what have you. I'm not Jesus; but neither am I some sick and twisted person who sees the possible amount of suffering and death that we face as a race as something that gives me a kick or whatnot. I'm just interested in seeing the least amount of death and a reduction in the aggregate suffering that people will go through due to inaction on climate change. Maybe this is just my default depressive mindset speaking; but what can I do about such a predicament?
  • Noble Dust
    8k
    It's just a thought to contemplate, regardless of my level of selflessness or what have you. I'm not Jesus; but neither am I some sick and twisted person who sees the possible amount of suffering and death that we face as a race as something that gives me a kick or whatnot.
    Posty McPostface

    So what? The tepid water of the middle ground is very palatable. That says nothing of its nutritiousness.
  • Shawn
    13.3k


    I don't entirely see what we are arguing over. Please enlighten me.
  • Noble Dust
    8k


    I think I'm more worried about the future than you. That's my understanding. I'm worried that you're not taking the human condition into consideration here.
  • Shawn
    13.3k


    There's really nothing I can say that would disprove what you're claiming. But I hope the fact that I brought up the topic quells some of your concerns about my intentions in making these posts about our future. Even if I decide not to have children, I am still interested in people not suffering excessively over a preventable outcome. But then again, who am I to make such grandiose claims about preventable outcomes?
  • Noble Dust
    8k


    Fair enough; apologies if I assumed too much.
  • Shawn
    13.3k


    Glad we got that settled. So, what are your further thoughts about our future? Do you think technology will save us or ultimately be a double-edged sword? I would like to think that we could utilize technology to help save us from doom and gloom.
  • iolo
    226
    Bitter Crank - I think you put the climate change question very clearly. I think that even if there were still a chance of remedial measures working, immediate profits will always come first and put a stop to them. I spend a lot of time arguing with extreme-right Americans: it is an experience to end all hope! :)
  • Noble Dust
    8k


    The point I'm trying to make is that "technology" has existed since the wheel. I see no evidence that we've "utilized technology to help save us from doom and gloom" over the past several thousand years.

    What happens instead is that, because we have a lot of hubris, we allow ourselves to be seduced by the exponentially-increasing technological chops of an elite few, and then assume that that elite power, held by a few, is some kind of religious balm that will heal the masses. It's fucking ludicrous.
  • Shawn
    13.3k

    Well, I think you have a point here. Personally, I don't believe in trickle-down; but productivity gains do exist through technology. I mean, it's hard to argue that life is more brutish now than it was a hundred or two hundred years ago.
  • Noble Dust
    8k


    Then I guess it just brings up the question of what the telos of it all is. Increased productivity, for instance, is also morally neutral. Increased productivity to do what? Feed the hungry? Bomb the shit out of them?

    Is life more or less brutish than it was a hundred or two hundred years ago? Probably less brutish. Why is that? Is it just because of technology in-itself?
  • Shawn
    13.3k


    I've always been a closet utilitarian. My conception of what is good is that people suffer less and enjoy life more. Though, I understand the contention with technology being a means to 'enjoy life more'. We have a lot of spare time due to the increase in productivity in, say, growing food, agriculture, and other forms of labor for a common unit of exchange. I don't think technology is bad or good; it's what we make of it that matters.

    When I said 'to each his own' earlier in the thread, I meant it in the context of a day arriving when people will be free to engage in any activity that they desire or think they would do best. Technology will eventually, in the form of a benevolent AI in the not-so-distant future, provide for all our needs, and then, well... nothing much further than that. I guess people will be free from the need to engage in intensive labor. Then what?
  • Noble Dust
    8k
    I've always been a closet utilitarian. My conception of what is good is that people suffer less and enjoy life more.
    Posty McPostface

    I'm not any kind of utilitarian, so I can't agree with this. It's probably a little tangential, but I see "the good" as an objective ideal that isn't attainable within the current state of the human condition. The human condition itself would have to change in a paradigmatic way in order for the good to be attainable. So things like "suffering less" and "enjoying life more" are small details in the face of the actual good. What I see as extremely dangerous is the hubristic assumption that technology itself is the causeway which leads to a paradigmatic shift in the state of the human condition. It's not so much a spoken philosophical position as it is a general zeitgeist, which is exactly what makes it dangerous. Our petty sweet nothings that we chide on about on this forum pale in comparison to the real, cultural shifts that are happening outside of our own prisonesque philosophies here.

    When I said 'to each his own' earlier in the thread, I meant it in the context of a day arriving when people will be free to engage in any activity that they desire or think they would do best.
    Posty McPostface

    The irony here, per what I've been saying all along, is that technology has absolutely nothing to do with this; it has nothing to do with our ability to freely engage in activity which we desire or think is best. The reality is that we already do this, and mostly we do it horribly, because we don't know what we're doing. That's the human condition. Again, technology is just a neutral means to achieve more power and productivity to do whatever it is we do; habits beneficial, destructive, malignant, etc.

    Technology will eventually, in the form of a benevolent AI in the not-so-distant future, provide for all our needs, and then, well... nothing much further than that. I guess people will be free from the need to engage in intensive labor. Then what?
    Posty McPostface

    Why assume AI will be benevolent? Again, this is exceedingly simple, to me: humans are morally problematic; humans make technology; technology as an extension of human problematicness will be problematic; it has been; it continues to be; AI, therefore, will also be problematic. It's stupidly simple.
  • BC
    13.6k
    I'm not Jesus
    Posty McPostface

    So no crucifixion then. Let's see, what are the current recommendations for hapless optimists?
  • BC
    13.6k
    Posty McPostface and Schopenhauer1:

    I owe both of you an apology. For some reason I thought this thread was started by Schop. Given his anti-natalist drive, the positive drift of the opening post suggested he had lurched to the opposite end of the spectrum and gone off the deep end.

    So it's just Posty. OK. All is well.
  • BC
    13.6k
    How do you think changes will occur, or what is your conception of the future as you see it?
    Posty McPostface

    If I discount the severe harms of global warming, overpopulation, nuclear annihilation, and other near-terminal events, can I be upbeat about the future? Sort of.

    I don't see any singularity of AI, no help from some vastly superior and benevolent aliens, no 180º turn abouts, no evolutionary leaps. Our "doom" is the capacity for considerable intelligence yoked together with ancient, dominant emotions. Our intelligence and our emotions are good things, settled on us by a more or less indifferent evolutionary process, but their interplay became a lot more problematic when we attained more technical prowess than we could manage (see The Sorcerer's Apprentice).

    As long as we were dependent on horsepower and the firepower of cannons firing mere projectiles, we were protected by the limits of our grasp. As our grasp got closer to our reach in the 18th and 19th centuries, we became more dangerous to ourselves. The danger was fully revealed in the 20th century with the capacity (and the preparation) for mutually assured nuclear annihilation. That risk has quieted down (it didn't go away), only to be replaced by the realization that the Industrial Revolution had a much higher price tag than we were previously aware of.

    Technology is fun and profitable and we have all embraced as much of it as we can get our hands on. In that we are just doing what we do.

    I see a brighter future for our species in retrograde development--rolling back, rather than rolling forward. We now have more technical complexity than we can manage. How far back would be a good idea? The Stone Age? Iron Age? Bronze Age? Roman Empire? Medieval period? Renaissance? Enlightenment? Pre-steam engine? Pre-photography? Pre-telegraph? Pre-telephone? Pre-recorded sound? Pre-radio? Pre-television?

    I'd be willing to stop with radio and film and forego television, the nuclear bomb, and Facebook. Maybe even Google. Oh, that would be hard. But we all have to make sacrifices if we are going to return to the past.

    People will (correctly) say that we can't go back in time. True enough. But we can't jump forward in time either, to some period when we are uploaded into the Cloud (and become the property of the then-current Mark Fuckerburg). But it is easier to give up technology that we can't live with than to hope for even vaster technology that we won't be able to live with.

    I don't have a television anymore, and haven't replaced it with watching TV online. By and large, I am living without network and cable television. Is it painful and difficult? No, it's not. It's actually quite pleasant and productive. Giving up TV gives me time to blather on here.

    Speaking of which, time to stop.