• Metaphysician Undercover
    12.3k
    Let me see if I properly apprehend your position. There's an independent world, but there's no shared access to it. Because of this lack of commonality in our distinct approaches to it, the independent world is theoretical. So I assume that it is not the independent world itself which is theoretical, but each individual's approach to it. Each person has one's own theoretical independent world, but we still assume a real, non-theoretical independent world which is separate from us.

    How do we validate "science" then? From what I understand, the principles of science are validated by empirical evidence, yet according to the principles above any such science must be made compatible with one's own theoretical independent world to be accepted as correct. The theoretical independent world within each of us provides the standard for judgement of what to accept or not to accept within the realm of science.

    So, when you say
    Science suggests I’m not special.David Pearce
    How do you justify this, or even find principles to accept it as having a grain of truth? You have described principles which make each and every individual person completely separate, distinct and unique, "special". Now you claim that science tells you that you are not special, and this is the basis for your claim that sentient beings everywhere disvalue agony and despair.

    Agony and despair are inherently disvaluable for me. Science suggests I’m not special. Therefore I infer that agony and despair are disvaluable for all sentient beings anywhere:David Pearce

    I do not believe that science suggests you are not special. I think science suggests exactly what you argued already, that we're all distinct, "special", each one of us having one's own distinct theoretical independent world. And I think that this generalization, that we are all somehow "the same", is an unjustified philosophical claim. So I think you need something stronger than your own personal feelings, that agony and despair are disvaluable to you, to support your claim that they are disvaluable to everyone.

    I think your appeal to "science" for such a principle is a little off track, because we really must consider what motivates human beings, intentions, and science doesn't yet seem to have a grasp on this. So we can see, for example, that the agony and despair of others are valued in some cases, such as torture, and, in a more subtle but much more common sense, as a negotiating tactic. People apply pressure to others to get what they want. And if you believe that agony and despair ought not be valued like this, we need to defer to some higher moral principles to establish the rightness of that. Where are we going to get these higher moral principles when we deny the reality of true knowledge concerning the common world, the external world which we must share with each other?

    However, if we consider what people want, we can validate science and generalizations through "the necessities of life". Don't you think that we need some firm knowledge, some truths, concerning the external world which provides us with the necessities of life, in order to produce agreement on moral principles? Would you agree that it is the external world which provides us with the necessities of life? Isn't this what we all have in common, and shouldn't this be our starting point for moral philosophy, the necessities of life which we must take from the external world, rather than your own personal feelings about agony and despair?
  • fdrake
    5.8k
    @David Pearce

    Hey, I have another question: are there any aspects of current Homo sapiens that you would identify as already transhuman?
  • David Pearce
    209
    What if gene-editing doesn't remove suffering but simply re-calibrates it in an unfavorable way?Outlander
    Compare the introduction of pain-free surgery:
    https://www.general-anaesthesia.com/
    Surgical anaesthesia isn't risk-free. Surgeons and anaesthesiologists need to weigh risk-reward ratios. But we recognise that reverting to the pre-anaesthetic era would be unthinkable. Posthumans will presumably reckon the same about reverting to pain-ridden Darwinian life.

    Consider again the SCN9A gene. Chronic pain disorders are a scourge. Hundreds of millions of people worldwide suffer terribly from chronic pain. Nonsense mutations of SCN9A abolish the ability to feel pain. Yet this short-cut to a pain-free world would be disastrous: we'd all need to lead a cotton-wool existence. However, dozens of other mutations of SCN9A confer either unusually high or unusually low pain thresholds. We can at least ensure our future children don’t suffer in the cruel manner of so many existing humans. Yes, there are lots of pitfalls. The relationship between pain-tolerance and risky behaviour needs in-depth exploration. So does the relationship between pain sensitivity and empathy; consider today's mirror-touch synaesthetes. However, I think the biggest potential risks of choosing alleles associated with elevated pain tolerance aren't so much the danger of directly causing suffering to the individual through botched gene therapy, but rather the unknown societal ramifications of creating a low-pain civilisation as a prelude to a no-pain civilisation. People behave and think differently if (like today's fortunate genetic outliers) they regard phenomenal pain as "just a useful signalling mechanism”. Exhaustive research will be vital.
  • David Pearce
    209
    Hey, I have another question: are there any aspects of current Homo sapiens that you would identify as already transhuman?fdrake
    Candidly, no. Until we edit our genomes, even self-avowed transhumanists will remain all too human. All existing humans run the same kind of egocentric world-simulation with essentially the same bodily form, sensory modes, hedonic range, core emotions, means of reproduction, personal relationships, life-cycle, maximum life-span, and default mode of ordinary waking consciousness as their ancestors on the African savannah. For sure, the differences between modern and archaic humans are more striking than the similarities; but I suspect we have more in common with earthworms than we will with our genetically rewritten posthuman successors.
  • David Pearce
    209
    Researchers at the Francis Crick Institute have revealed that CRISPR-Cas9 genome editing can lead to unintended mutations at the targeted section of DNA in early human embryos.Olivier5
    A hundred-year moratorium on reckless genetic experimentation would be good; but antinatalism will always be a minority view. Instead, prospective parents should be encouraged to load the genetic dice in favour of healthy offspring by making responsible genetic choices. Base-editing is better than CRISPR-Cas9 for creating invincibly happy, healthy babies:
    https://www.labiotech.eu/interview/base-editing-horizon-discovery/
  • ChatteringMonkey
    1.3k
    Hi David,

    I have a (maybe) straightforward question regarding the value assumptions in negative utilitarianism, or even utilitarianism in general, and the possible consequences thereof.

    My intuition against utilitarianism always has been that pain, or even pain 'and' pleasure, is not the only thing that matters to us, and so reducing everything to that runs the risk of glossing over other things we value.

    Would you say that is just factually incorrect, i.e. scientific research tells us that in fact everything is reducible to pain (or more expanded to pain/pleasure)?

    And if it's not the case that everything is reducible to pain/pleasure, wouldn't genetic alteration solely with the purpose of abolition of pain, run the risk of impoverishing us as human beings? Do we actually have an idea already of how pain and pleasure are interrelated (or not) with the rest of human emotions in the sense that it would be possible in principle to remove pain and keep all the rest intact?

    Thank you for your time. It's been an interesting thread already.
  • David Pearce
    209
    I don’t even know if other people are capable of suffering let alone that I have some kind of weird abstract reason to care about it. It seems to me that the reasons that we might have to minimize the suffering of others are almost just as speculative...TheHedoMinimalist
    You say you're "mostly an ethical egoist". Do you accept the scientific world-picture? Modern civilisation is based on science. Science says that no here-and-nows are ontologically special. Yes, one can reject the scientific world-picture in favour of solipsism-of-the-here-and-now (cf. https://www.hedweb.com/quora/2015.html#idsolipsism). But if science is true, then solipsism is a false theory of the world. There’s no reason to base one’s theories of ethics and rationality on a false theory. Therefore, I believe that you suffer just like me. I favour the use of transhumanist technologies to end your suffering no less than mine. Granted, from my perspective your suffering is theoretical. Yet its inaccessibility doesn't make it any less real. Am I mistaken to act accordingly?
  • David Pearce
    209

    Thank you. Evolution via natural selection has encephalised our emotions, so we (dis)value many things beyond pain and pleasure under that description. If intentional objects were encephalised differently, then we would (dis)value them differently too. Indeed, our (dis)values could grotesquely be inverted – “grotesquely” by the lights of our common sense, at any rate.

    What's resistant to inversion is the pain-pleasure axis itself. One simply can't value undergoing agony and despair, just as one can't disvalue experiencing sublime bliss. The pain-pleasure axis discloses the world's inbuilt metric of (dis)value.

    However, it's not necessary to accept this analysis to recognise the value of phasing out the biology of involuntary suffering. Recalibrating your hedonic treadmill can in principle leave your values and preference architecture intact – unless one of your preferences is conserving your existing (lower) hedonic range and (lower) hedonic set-point. In practice, a biohappiness revolution is bound to revolutionise many human values – and other intentional objects of the Darwinian era. But bioconservatives who fear their values will necessarily be subverted by the abolition of suffering needn't worry. Even radical elevation of your hedonic set-point doesn't have to subvert your core values any more than hedonic recalibration would change your favourite football team. What would undermine your values is uniform bliss – unless you're a classical utilitarian for whom uniform orgasmic bliss is your ultimate cosmic goal. Life based on information-sensitive gradients of intelligent well-being will be different. What guises this well-being will take, I don't know.
  • ChatteringMonkey
    1.3k
    Thank you. Evolution via natural selection has encephalised our emotions so we (dis)value many things beyond pain and pleasure under that description. If intentional objects were encephalised differently, then we would (dis)value them differently too. Indeed, our (dis)values could grotesquely be inverted – “grotesquely” by the lights of (our) common sense, at any rate.

    What's resistant to inversion is the pain-pleasure axis itself. One simply can't value undergoing agony and despair, just as one can't disvalue experiencing sublime bliss. The pain-pleasure axis discloses the world's inbuilt metric of (dis)value.
    David Pearce

    Thank you for the response. I hope you don't mind a follow up question, because this last paragraph is something I don't quite fully get yet.

    I understand that many of our emotions and values are a somewhat arbitrary result of evolution. And I don't really have a fundamental bioconservative objection to altering them, because indeed they could easily have been otherwise. What puzzles me is how you think we can go beyond our own biology and re-evaluate it for the purpose of genetic re-engineering. Since values are not ingrained in the fabric of the universe (or maybe I should say the part of the universe that is not biological), i.e. they are something we bring to the table, from what perspective are we re-evaluating them, then? You seem to be saying there is something fundamental about pain and pleasure, because it is life's (or actually the world's?) inbuilt metric of value... It just isn't entirely clear to me why.

    To make this question somewhat concrete: wouldn't it be expected, your and other philosophers' efforts notwithstanding, that in practice genetic re-engineering will be used as a tool for realising the values we have now? And by 'we' I more often than not mean political and economic leaders, who ultimately have the last say because they are the ones financing research. I don't want to sound alarmist, but can we really trust something with such far-reaching consequences as a toy in power and status games?
  • TheHedoMinimalist
    460
    You say you're "mostly an ethical egoist". Do you accept the scientific world-picture? Modern civilisation is based on science. Science says that no here-and-nows are ontologically special. Yes, one can reject the scientific world-picture in favour of solipsism-of-the-here-and-now (cf. https://www.hedweb.com/quora/2015.html#idsolipsism). But if science is true, then solipsism is a false theory of the world.David Pearce

    I agree that solipsism is likely to be false, and I think it's more likely than not that you are capable of suffering. I was just bringing up the epistemic problem that we seem to have regarding knowing that other people are capable of suffering, and noting that this epistemic problem doesn't seem to exist for egoists. I don't think the possibility of many other people not being capable of experiencing suffering is too far-fetched though. It doesn't necessarily require you to accept solipsism; I think there are other ways to argue for that conclusion. For example, take the view held by some philosophers that we might be living in a simulation. There might be conscious minds outside of that simulation, but you couldn't do anything to reduce the suffering of those other conscious minds, so you might as well just reduce your own suffering. In addition, it's possible that most but not all people are p-zombies.

    I also want to point out that the possibility of these metaphysical views being true isn't really the primary argument that I have for egoism. Rather, they are secondary considerations which should still give someone reason to prioritize their own welfare in some minor way, just in case those views turn out to be true. For example, say that I think there is roughly a 10% chance that those views are true. Wouldn't this mean that I have an additional reason to prioritize my own suffering more, by 10%?

    Granted, from my perspective your suffering is theoretical. Its inaccessibility doesn't make it any less real. Am I mistaken to act accordingly?David Pearce

    I would say that it depends on what reasons you have for accepting hedonism. I accept hedonism primarily because of how robust that theory seems to be against various forms of value skepticism and value nihilism. Imagine that you were talking to a philosopher who says that he isn’t convinced that suffering is intrinsically bad. What would your response be in that situation? My response would probably be along the lines of asking that person to put his hand on a hot stove and see if he still thinks there’s nothing intrinsically bad about suffering. I think there is a catch with this sort of response though. This is because I’m appealing only to how he can’t help but think that his own suffering is bad. I can’t provide him with an example that involves someone else putting a hand on a hot stove because that would have no chance of eliminating his skepticism.

    Of course, one might object that we shouldn't really be that skeptical about value claims, and that we should lower the epistemic threshold for reasonably holding beliefs about ethics and value. But then I think there would have to be an additional explanation given for why hedonism would still be the most plausible theory, as it's not completely clear to me why having the intrinsic aim of minimizing suffering in the whole world is more reasonable than another kind of more abstract and speculative intrinsic aim, like the intrinsic aim of minimizing instances of preference frustration, for example. In addition, even if we lower the threshold for reasonable ethical belief, it still seems that we should care more about our own suffering just in case altruism turns out to be false. Altruism encapsulates egoism in the sense that pretty much every altruist agrees that it is good to minimize your own suffering, but egoists usually think that minimizing the suffering of others is just a complete waste of time.
  • Noble Dust
    7.8k
    The only way to develop a scientific knowledge of consciousness is to adopt the experimental method.David Pearce

    Are you familiar with Rudolf Steiner? I have plenty of reservations about him, but would be curious if you have any thoughts. It's interesting to note that, despite his eccentricities, perhaps the main thing he put forth that is (arguably) still working well is biodynamics.

    Transhumanism can treat our endogenous opioid addiction by ensuring that gradients of lifelong bliss are genetically hardwired.David Pearce

    I'm admittedly a bit behind on transhumanist thought (so why am I posting here?) but the idea of being able to hardwire gradients of bliss smacks of hubris to me. I think of Owen Barfield's analogy of the scientific method in which he describes "engine knowledge" vs. "driver knowledge". A car mechanic understands how the engine works, and what needs to be fixed in order for the motor to work (scientist). A driver of the car understands why the engine needs to work properly: to take the driver from point A to point B. There are countless reasons why a driver might need to go from point A to point B. The driver is not a scientist, by the way; maybe an artist, or maybe a humble average Joe who just wants to provide for his family. Who is wiser? [can provide reference to this Barfield analogy, will just take a few minutes of digging].

    In my mind, you have engine knowledge. I have driver knowledge.
  • Olivier5
    6.2k
    Base-editing is better than CRISPR-Cas9 for creating invincibly happy, healthy babies:David Pearce

    It seems to me that you are indulging in a vision of paradise that probably serves the same purpose as the Christian paradise: to console, to bring solace. It's a form of escapism.


    Isaiah 11:

    6 The wolf will live with the lamb,
    the leopard will lie down with the goat,
    the calf and the lion and the yearling together;
    and a little child will lead them.
    7 The cow will feed with the bear,
    their young will lie down together,
    and the lion will eat straw like the ox.
    8 The infant will play near the cobra’s den,
    and the young child will put its hand into the viper’s nest.
    9 They will neither harm nor destroy
    on all my holy mountain,
    for the earth will be filled with the knowledge of the Lord
    as the waters cover the sea.
  • David Pearce
    209
    It seems to me that you are indulging in a vision of paradise that probably serves the same purpose as the Christian paradise: to console, to bring solace. It's a form of escapism.Olivier5
    A plea to write good code so future sentience doesn't suffer isn't escapism; it's a policy proposal for implementing the World Health Organization's commitment to health. Future generations shouldn't have to undergo mental and physical pain.

    The "peaceable kingdom" of Isaiah?
    The reason for my quoting the Bible isn't religious leanings: I'm a secular scientific rationalist. Rather, bioconservatives are apt to find the idea of veganising the biosphere unsettling. If we want to make an effective case for compassionate stewardship, then it's good to stress that the vision of a cruelty-free world is venerable. Quotes from Gautama Buddha can be serviceable too. Only the implementation details (gene-editing, synthetic gene drives, cross-species fertility-regulation, etc.) are a modern innovation.
  • Olivier5
    6.2k
    Only the implementation details (gene-editing, synthetic gene drives, cross-species fertility-regulation, etc) are a modern innovation.David Pearce

    Exactly. Yahweh has given way to Science as the Supreme Being in many modern minds, and therefore science is now required to deliver the same stuff that Yahweh was previously in charge of, including paradise. The Kingdom of God has been replaced in our imagination by the Kingdom of Science.
  • David Pearce
    209
    the idea of being able to hardwire gradients of bliss smacks of hubris to me.Noble Dust
    Creating new life and suffering via the untested genetic experiments of sexual reproduction feels natural. Creating life engineered to be happy – and repairing the victims of previous genetic experiments – invites charges of “hubris”. Antinatalists might say that bringing any new sentient beings into this god-forsaken world is hubristic. But if we accept that the future belongs to life lovers, then who shows greater humility:
    (1) prospective parents who trust that quasi-random genetic shuffling will produce a benign outcome?
    (2) responsible (trans)humans who ensure their children are designed to be healthy and happy?
  • David Pearce
    209
    it's not completely clear to me why having the intrinsic aim of minimizing suffering in the whole world is more reasonable than another kind of more abstract and speculative intrinsic aim, like the intrinsic aim of minimizing instances of preference frustration, for exampleTheHedoMinimalist
    The preferences of predator and prey are irreconcilable. So are trillions of preferences of social primates. The challenge isn't technological, but logical. Moreover, even if vastly more preferences could be satisfied, hedonic adaptation would ensure most sentient beings aren't durably happier. Hence my scepticism about "preference utilitarianism", a curious oxymoron. Evolution designed Darwinian malware to be unsatisfied. By contrast, using biotech to eradicate the molecular signature of experience below hedonic zero also eradicates subjectively disvaluable states. In a world animated entirely by information-sensitive gradients of well-being, there will presumably still be unfulfilled preferences. There could still, optionally, be social, economic and political competition – even hyper-competition – though one may hope full-spectrum superintelligence will deliver superhuman cooperative problem-solving prowess rather than primate status-seeking. Either way, a transhuman world without the biology of subjective disvalue would empirically be a better world for all sentience. It’s unfortunate that the goal of ending suffering is even controversial.
  • David Pearce
    209
    You seem to be saying there is something fundamental about pain and pleasure, because it is life's (or actually the world's?) inbuilt metric of value... It just isn't entirely clear to me why.ChatteringMonkey
    IMO, asking why agony is disvaluable is like asking why phenomenal redness is colourful. Such properties are mind-dependent and thus (barring dualism) an objective, spatio-temporally located feature of the natural world:
    https://www.hedweb.com/quora/2015.html#metaethics
    Evolution doesn’t explain such properties, as distinct from when and where they are instantiated. Phenomenal redness and (God forbid) agony could be created from scratch in a mini-brain, i.e. they are intrinsic properties of some configurations of matter and energy. It would be nice to know why the solutions to the world's master equation have the textures they do; but in the absence of a notional cosmic Rosetta stone to "read off" their values, it's a mystery.

    wouldn't it to be expected, your and other philosophers efforts notwithstanding, that in practice genetic re-engineering will be used as a tool for realising the values we have now? And by 'we' I more often then not mean political and economic leaders who ultimately have the last say because they are the ones financing research. I don't want to sound alarmist, but can we really trust something with such far-reaching consequence as a toy in power and status games?ChatteringMonkey
    I've no short, easy answer here. But fast-forward to a time later this century when approximate hedonic range, hedonic set-points and pain-sensitivity can be genetically selected – both for prospective babies and increasingly (via autosomal gene therapy) for existing humans and nonhuman animals. Anti-aging interventions and intelligence-amplification will almost certainly be available too, but let's focus on hedonic tone and the pleasure-pain axis. What genetic dial-settings will prospective parents want for their children? What genetic dial-settings and gene-expression profiles will they want for themselves? Sure, state authorities are going to take an interest too. Yet I think the usual sci-fi worries of, e.g., some power-crazed despot breeding a caste of fearless super-warriors (etc.), are misplaced. Like you, I have limited faith in the benevolence of the super-rich. But we shouldn't neglect the role of displays of competitive male altruism. Also, one of the blessings of information-based technologies such as gene-editing is that once the knowledge is acquired, their use won't be cost-limited for long. Anyhow, I'm starting to sing a happy tune, whereas there are myriad ways things could go wrong. I worry I'm sounding like a propagandist rather than an advocate. But I think the basic point stands. Phasing out hedonically sub-zero experience is going to become technically feasible and eventually technically trivial. Humans may often be morally apathetic, but they aren't normally malicious. If you had God-like powers, how much involuntary suffering would you conserve in the world? Tomorrow's policy makers will have to grapple with this kind of question.
  • ChatteringMonkey
    1.3k
    IMO, asking why agony is disvaluable is like asking why phenomenal redness is colourful. Such properties are mind-dependent and thus (barring dualism) an objective, spatio-temporally located feature of the natural world:
    https://www.hedweb.com/quora/2015.html#metaethics
    David Pearce

    I certainly won't deny the fact that pain is real for me and other people, and I wouldn't even deny that pain is inherently disvaluable. Where I would want to push back is on the idea that it needs to be the only thing we are concerned with. It seems more complicated to me, but maybe this is more a consequence of my lack of knowledge on the subject, I don't know.

    Here's an example you've probably heard many times before: sports. We seem to deliberately seek out and endure pain to attain some other values, such as fitness, winning or looking good... I wouldn't say we value the pain we endure during sports, but it does seem to be the case that sometimes we value other things more than we disvalue pain. So how would you reconcile this kind of behavior with pain/pleasure being the inbuilt metric of (dis)value?

    Do you draw a distinction between (physical) pain and suffering? To me there seems to be something different going on with suffering, something different from the mere experience of pain. There also seems to be a mental component where we suffer because of anticipating bad things, because we project ourselves into the future... This would also be the reason why I would draw a distinction between humans and most other animals, because they seem to lack the ability to project further into the future. To be clear, by making this distinction I don't want to imply that we shouldn't treat animals vastly better than we do now, just that I think there is a difference between 'experience of pain in the moment' and 'suffering', which possibly could have some ethical ramifications.

    I've no short, simple, easy answer here. But fast-forward to a time later this century when approximate hedonic range, hedonic set-points and pain-sensitivity can be genetically selected – both for new babies and increasingly (via autosomal gene therapy) for existing humans and nonhuman animals. Anti-aging interventions and intelligence-amplification will almost certainly be available too, but let's focus on hedonic tone and the pleasure-pain axis. What genetic dial-settings will prospective parents want for their children? What genetic dial-settings and gene-expression profiles will they want for themselves? Sure, state authorities are going to take an interest too. Yet I think the usual sci-fi worries of, e.g., some power-crazed despot breeding a caste of fearless super-warriors (etc.), are misplaced. Like you, I have limited faith in the benevolence of the super-rich. But we shouldn't neglect the role of displays of competitive male altruism. Also, one of the blessings of information-based technologies such as gene-editing is that once the knowledge is acquired, their use won't be cost-limited for long. Anyhow, I'm starting to sing a happy tune, whereas there are myriad ways things could go wrong. I worry I'm sounding like a propagandist rather than an advocate. But I think the basic point stands. Phasing out hedonically sub-zero experience is going to become technically feasible and eventually technically trivial. Humans may often be morally apathetic, but we aren't normally malicious. If you had God-like powers, how much involuntary suffering would you conserve in the world? Tomorrow's policy makers will have to grapple with this kind of decision.David Pearce

    I actually agree with most of this. Ideally I wouldn't want this kind of power, because it seems way beyond the responsibilities a chattering monkey can handle. But if it can be done, it probably will be done... and given the state and prospects of science at this moment, it seems likely it will be done at some point in the future, whether we want it or not. And since we presumably will have that power, I suppose it's hard to deny the responsibility that comes with it. So philosophers might as well try and figure out how best to go about it when it does happen; I can certainly support that effort. What I would say is that I would want to understand a whole lot better how it all exactly operates before making definite claims about the direction of our species and the biosphere. But I'm not the one doing the research, so I can't really judge how well we understand it already.
  • Down The Rabbit Hole
    516
    David, at some point after implementation of the technology (maybe after 300 years, maybe after 1000 years, maybe after a million years) it is bound to be used to cause someone an unnatural amount of suffering. Suffering that is worse than could be experienced naturally.

    Is one or two people being subjected to an unnatural amount of suffering (at any point in the future) worth it to provide bliss for the masses? Shouldn't someone who would walk away from Omelas walk away from this technology?
  • TheHedoMinimalist
    460
    The preferences of predator and prey are irreconcilable. So are trillions of preferences of social primates. The challenge isn't technological, but logical.David Pearce

    I don't see how that's a problem for PU though. I think they could easily respond to this concern by simply stating that we regrettably have to sacrifice the preferences of one group to fulfill the preferences of a more important group in these sorts of dilemmas. I think this sort of thing applies to hedonism as well: in this dilemma, it seems we would also have to choose between prioritizing reducing the suffering of the predator and prioritizing reducing the suffering of the prey.

    Either way, a transhuman world without the biology of subjective disvalue would empirically be a better world for all sentience.David Pearce

    Well, I’m curious to know what reasons you think we have to care specifically about the welfare of sentient creatures and not other kinds of entities. There are plenty of philosophers who have claimed that various non-sentient entities are legitimate intrinsic value bearers as well. For example, I’ve heard claims that there’s intrinsic value in the survival of all forms of life, including non-sentient life like plants and fungi. I’ve also heard claims that AI programs could have certain achievements which are valuable for their own sake, like the achievements of a chess-playing neural network that taught itself how to play by playing against itself billions of times and that can now beat every human player in the world. I’m just wondering what makes you think that sentient life is more special than those other sorts of entities. The main reason I can think of for holding that sentient beings are the only type of value bearers seems to appeal to the truthfulness of egoism.
  • David Pearce
    209
    David, at some point after implementation of the technology (maybe after 300 years, maybe after 1000 years, maybe after a million years) it is bound to be used to cause someone an unnatural amount of suffering. Suffering that is worse than could be experienced naturally.

    Is one or two people being treated to an unnatural amount of suffering (at any point in the future) worth it to provide bliss for the masses? Shouldn't someone that would walk away from Omelas walk away from this technology?
    Down The Rabbit Hole
    Yes, anyone who understands suffering should "walk away from Omelas". If the world had an OFF switch, then I'd initiate a vacuum phase-transition without a moment's hesitation. But there is no OFF switch. It's fantasy. Its discussion alienates potential allies. Other sorts of End-of-the-World scenarios are fantasy too, as far as I can tell. For instance, an AI paperclipper (cf. https://www.hedweb.com/quora/2015.html#dpautistic) would align with negative utilitarian (NU) values; but paperclippers are impracticable too. One of my reasons for floating the term "high-tech Jainism" was to debunk the idea that negative utilitarians are plotting to end life rather than improve it. For evolutionary reasons, even many depressives are scared of death and dying. As a transhumanist, I hope we can overcome the biology of aging. So I advocate opt-out cryonics and opt-in cryothanasia to defang death and bereavement for folk who fear they won't make the transition to indefinite youthful lifespans. This policy proposal doesn’t sound very NU – why conserve Darwinian malware? – but ending aging / cryonics actually dovetails with a practical NU ethic.

    S(uffering)-risks? The s-risk I worry about is the possibility that a technical understanding of suffering and its prevention could theoretically be abused instead to create hyperpain. So should modern medicine not try to understand pain, depression and mental illness for fear the knowledge could be abused to create something worse? After all, human depravity has few limits. It's an unsettling thought. Mercifully, I can't think of any reason why anyone, anywhere, would use their knowledge of medical genetics to engineer a hyper-depressed, hyperpain-racked human. By contrast, the contemporary biomedical creation of "animal models" of depression is frightful.

    Is it conceivable that (trans)humans could phase out the biology of suffering and then bring it back? Well, strictly, yes: some philosophers have seriously proposed that an advanced civilisation that has transcended the horrors of Darwinian life might computationally re-create that life in the guise of running an "ancestor simulation". Auschwitz 2.0? Here, at least, I’m more relaxed. I don't think ancestor simulations are technically feasible – classical computers can't solve the binding problem. I also discount the possibility that superintelligences living in posthuman heaven will create pockets of hell anywhere in our forward light-cone. Creating hell – or even another Darwinian purgatory – would be fundamentally irrational.

    Should a proponent of suffering-focused ethics spend so much time exploring transhumanist futures such as quasi-immortal life animated by gradients of superhuman bliss? Dreaming up blueprints for paradise engineering can certainly feel morally frivolous. However, most people just switch off if one dwells entirely on the awfulness of Darwinian life. My forthcoming book is entitled “The Biohappiness Revolution" – essentially an update on “The Hedonistic Imperative”. Negative utilitarianism, and maybe even the abolitionist project, is unsaleable under the latter name. Branding matters.
  • Michael
    14k
    I don't think ancestor simulations are technically feasible – classical computers can't solve the binding problem.David Pearce

    Surely if something like the human brain and its resultant consciousness can come into being "by chance" (via evolution by natural selection over millions of years), then shouldn't it be possible for intelligent life (with the help of advanced computers) to artificially create something similar? And given that things like dreams, where there is little to no external stimulus involved, are possible, shouldn't artificially stimulated experiences also be possible?
  • David Pearce
    209

    Artificial mini-brains, and maybe one day maxi-minds, are technically feasible. What's not possible, on pain of spooky "strong" emergence, is the creation of phenomenally-bound subjects of experience in classical digital computers:
    https://www.binding-problem.com/
    https://www.hedweb.com/hedethic/binding-interview.html
  • David Pearce
    209
    I don’t see how that’s a problem for PU though. I think they could easily respond to this concern by simply stating that we regrettably have to sacrifice the preferences of one group to fulfill the preferences of a more important group in these sorts of dilemmas. I also think this sort of thing applies to hedonism. In this dilemma, it seems we would likewise have to choose between prioritizing reducing the suffering of the predator or prioritizing reducing the suffering of the prey.TheHedoMinimalist
    Even if we prioritise, preference utilitarianism doesn’t work. Well-nourished tigers breed more tigers. An exploding tiger population then has more frustrated preferences. The swollen tiger population starves in consequence of the dwindling numbers of their prey. Protecting herbivores from predation doesn’t work either – at least, not on its own. As well as frustrating the preferences of starving predators, a population explosion of herbivores would lead to mass starvation and hence even more frustrated preferences. Insofar as humans want ethically to conserve recognisable approximations of today’s "charismatic mega-fauna", full-blown compassionate stewardship of Nature will be needed: reprogramming predators, cross-species fertility-regulation, gene drives, robotic “AI nannies” – the lot. From a utilitarian perspective (cf. https://www.utilitarianism.com), piecemeal interventions to solve the problem of wild animal suffering are hopeless.
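    The boom-and-bust cycle described here is the classic Lotka-Volterra predator-prey dynamic. As a rough illustrative sketch (the parameter values and starting populations below are my own hypothetical choices, not anything from the discussion), a minimal Euler-integrated simulation shows why well-fed predators multiply, crash the prey population, and then starve – no steady state of satisfied preferences emerges on its own:

    ```python
    def lotka_volterra(prey0=10.0, pred0=5.0, alpha=1.1, beta=0.4,
                       delta=0.1, gamma=0.4, dt=0.001, t_max=50.0):
        """Return (prey, predator) population trajectories.

        alpha: prey birth rate          beta: predation rate
        delta: predator gain per kill   gamma: predator starvation rate
        """
        prey, pred = prey0, pred0
        prey_hist, pred_hist = [prey], [pred]
        for _ in range(int(t_max / dt)):
            dprey = prey * (alpha - beta * pred)   # births minus predation
            dpred = pred * (delta * prey - gamma)  # kills minus starvation
            prey += dprey * dt
            pred += dpred * dt
            prey_hist.append(prey)
            pred_hist.append(pred)
        return prey_hist, pred_hist

    prey, pred = lotka_volterra()
    # Predators boom while prey is abundant, prey crashes, predators starve,
    # prey recovers – and the cycle of frustrated preferences repeats.
    print(f"prey range: {min(prey):.2f} to {max(prey):.2f}")
    print(f"predator range: {min(pred):.2f} to {max(pred):.2f}")
    ```

    With these (hypothetical) parameters the populations oscillate indefinitely rather than settling; that is the sense in which piecemeal "feed the tigers" or "protect the herbivores" interventions fail on their own.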

    I’m curious to know what reasons you think we have to care specifically about the welfare of sentient creatures and not other kinds of entitiesTheHedoMinimalist
    Mattering is a function of the pleasure-pain axis. Empirically, mattering is built into the very nature of the first-person experience of agony and ecstasy. By contrast, configurations of matter and energy that are not subjects of experience have no interests. Subjectively, nothing matters to them. A sentient being may treat them as significant, but their importance is only derivative.
  • Michael
    14k
    Artificial mini-brains, and maybe one day maxi-minds, are technically feasible. What's not possible, on pain of spooky "strong" emergence, is the creation of phenomenally-bound subjects of experience in classical digital computers:David Pearce

    Perhaps not, but I don't think that the use of classical digital computers is central to any ancestor simulation hypothesis. Don't such theories work with artificial mini-brains?
  • David Pearce
    209
    Perhaps not, but I don't think that the use of classical digital computers is central to any ancestor simulation hypothesis. Don't such theories work with artificial mini-brains?Michael
    One may consider the possibility that one could be a mind-brain in a neurosurgeon’s vat in basement reality rather than a mind-brain in a skull as one naively supposes. However, has one any grounds for believing that this scenario is more likely? Either way, this isn’t the Simulation Hypothesis as envisaged in Nick Bostrom’s original Simulation Argument (cf. https://www.simulation-argument.com/).
    What gives the Simulation Argument its bite is that, pre-reflectively at any rate, running a bunch of ancestor-simulations is the kind of cool thing an advanced civilisation might do. And if you buy the notion of digital sentience, and the idea that what you’re now experiencing is empirically indistinguishable from what your namesake in multiple digital ancestor-simulations is experiencing, then statistically you are more likely to be in one of the simulations than in the original.

    Elon Musk puts the likelihood that we’re living in primordial basement reality at billions-to-one against. But last time I asked, Nick didn't assign a credence to the Simulation Hypothesis of more than 20%.

    I think reality has only one level and we’re patterns in it.
  • David Pearce
    209
    where I would want to push back is that it needs to be the only thing we are concerned with.... We seem to deliberately seek out and endure pain to attain some other values, such as fitness, winning or looking good... I wouldn't say we value the pain we endure during sports, but it does seem to be the case that sometimes we value other things more than we disvalue pain. So how would you reconcile this kind of behavior with pain/pleasure being the inbuilt metric of (dis)value?ChatteringMonkey

    I think the question to ask is why we nominally (dis)value many intentional objects that are seemingly unrelated to the pleasure-pain axis. "Winning”, and demonstrating one is a dominant alpha male who can stoically endure great pain and triumph in competitive sports, promises wider reproductive opportunities than being a milksop. And for evolutionary reasons, mating, for most males, is typically highly rewarding. We see the same in the rest of Nature too. Recall the extraordinary lengths some nonhuman animals will go to in order to breed. What’s more, if (contrary to what I’ve argued) there were multiple axes of (dis)value rather than a sovereign pain-pleasure axis, then there would need to be some kind of meta-axis of (dis)value as a metric to regulate trade-offs.

    You may or may not find this analysis persuasive; but critically, you don't need to be a psychological hedonist, nor indeed any kind of utilitarian, to appreciate that it will be good to use biotech to end suffering.
  • ChatteringMonkey
    1.3k
    I think the question to ask is why we nominally (dis)value many intentional objects that are seemingly unrelated to the pleasure-pain axis. "Winning”, and demonstrating one is a dominant alpha male who can stoically endure great pain and triumph in competitive sports, promises wider reproductive opportunities than being a milksop. And for evolutionary reasons, mating is typically highly rewarding. We see the same in the rest of Nature too. Recall the extraordinary lengths some nonhuman animals will go to in order to breed. What’s more, if (contrary to what I’ve argued) there were multiple axes of (dis)value rather than a sovereign pain-pleasure axis, then there would need to be some kind of meta-axis of (dis)value as a metric to regulate trade-offs.David Pearce

    Yes, Nietzsche for instance chose 'health/life-affirmation' as his preferred meta-axis to re-evaluate values, you seem to favor pain/pleasure... People seem to disagree on what is more important. But maybe there's a way, informed among other things by contemporary science, to get to more of an objective measure, I don't know... which is why I asked.

    You may or may not find this analysis persuasive; but critically, you don't need to be a psychological hedonist, nor indeed any kind of utilitarian, to appreciate that it will be good if we can use biotech to end suffering.David Pearce

    This is certainly a fair point, but I'd add that while it would be good to end (or at least reduce) suffering, it needn't be restricted to that. If we are going to use biotech to improve humanity, we might as well look to improve it on multiple axes... a multivalent approach.
  • David Pearce
    209

    Some of the soul-chilling things Nietzsche said make him sound as though he had an inverted pain-pleasure axis: https://www.nietzsche.com
    In reality, Nietzsche was in thrall to the axis of (dis)value no less than the most ardent hedonist.

    In any event, transhumanists aspire to a civilisation of superintelligence and superlongevity as well as superhappiness – and many more "supers" besides.
  • David Pearce
    209
    I do not believe that science suggests you are not special. I think science suggests exactly what you argued already, that we're all distinct, "special", each one of us having one's own distinct theoretical independent world. And I think that this generalization, that we are all somehow "the same", is an unjustified philosophical claim. So I think you need something stronger than your own personal feelings, that agony and despair are disvaluable to you, to support your claim that they are disvaluable to everyone.Metaphysician Undercover
    I am not without idiosyncrasies. But short of radical scepticism, the claim that agony and despair are disvaluable by their very nature is compelling. If you have any doubt, put your hand in a flame. Animals with a pleasure-pain axis have the same strongly evolutionarily conserved homologous genes, neurological pathways and neurotransmitter systems for pleasure- and pain-processing, and the same behavioural responses to noxious stimuli. Advanced technology in the form of reversible thalamic bridges promises to make the conjecture experimentally falsifiable too (cf. https://www.nytimes.com/2011/05/29/magazine/could-conjoined-twins-share-a-mind.html). Reversible thalamic bridges should also allow partial “mind-melding” between individuals of different species.