Moral realism holds that “moral claims do purport to report facts and are true if they get those facts right” (Sayre-McCord 2015). While some philosophers treat moral objectivism, the view that “things are morally right, wrong, good, bad etc. irrespective of what anybody thinks of them” (Pölzler and Wright 2019: 1), as distinct from realism, it is generally taken that moral realism implies moral objectivism (Björnsson 2012). Accordingly, I treat the two together and use moral realism as a catch-all term. Thus far, human intelligence and tools have been unable to conclusively confirm the truth of moral realism. However, a sufficiently advanced intelligence, like an AGI, may be capable of doing so, provided the notion of ‘solving’ the puzzle is coherent in the first place. I expect PADs to consider the existence of moral facts plausible. I base this prediction on the popularity of moral realist (56%) over anti-realist (28%) meta-ethics in the same survey of philosophers (Bourget and Chalmers 2014). I make no prediction with respect to technicians, and consider the prediction regarding PADs tentative.
...
Technicians and PADs agree that the AGI ethics debate is primarily concerned with mitigating existential risk. Both groups confidently predict that AGI will be the most ethically consequential technology ever created. Accordingly, a suitably proactive response demands greater funding, research, and input from the academy, governments, and the private sector. Technicians and PADs both endorse a highly precautionary approach, deemphasizing the moral and humanitarian well-being an AGI could provide in order to focus first on preventing worst-case scenarios. Addressing these black swans has stimulated a demand for better knowledge in two adjacent research areas: first, a model of what consciousness is, a theory of its development, and a sense of whether non-organic entities or systems can become conscious; second, the broader similarities and differences between organic and mechanical systems, so that the utility, generalizability, and predictive power of empirical trends can be better assessed. The case of AGI highlights agreement between PADs and technicians on humanity’s extensive moral obligation towards future generations. In an age of multiple existential risks, including those unrelated to AGI, e.g., climate change, humans today bear direct responsibility for minimizing both the termination of future lives and any increase in their suffering.
When it comes to the possibility of AGI, I am most concerned with the ethical/moral foundations we lay down for it. The reason is that once this system surpasses human comprehension, we have no way to understand it, change its course, or know where it may lead.
The solution seems to be either to hope beyond hope that we, or it, can discover Moral Truths (Moral Realism), or to splice together some form of ethical framework akin to Asimov's 'Three Laws of Robotics'. — I like sushi
I would also argue we should hope for a conscious system rather than some abstraction that we have no hope of communicating with. A non-conscious free-wheeling system that vastly surpasses human intelligence is a scary prospect if we have no direct line of communication with it (in any human intelligible sense). — I like sushi
Also: Asimov's 'Three Laws of Robotics' were deficient, and he pointed out the numerous contradictions and problems in his own writings. So, it seems to me we need something much better than that. I would have no idea where to start apart from what I have written above, which is definitely not sufficient. — ToothyMaw
If you are suggesting we root our ethical foundations for AGI in moral facts: even if we or the intelligences we might create could discover some moral facts, what would compel any superintelligences to abide those facts given they have already surpassed us analytically? What might an AGI see when it peers into the moral fabric of the universe and how might that change its - or others' - behavior? And what if we do discover these moral facts and they are so repugnant or detrimental to humanity that we wish not to abide them ourselves? — ToothyMaw
I think you are envisioning some sentient being here. I am not. There is nothing to suggest AI or AGI will be conscious. AGI will just have the computing capacity to far outperform any human. I am not assuming sentience on any level (that is the scary thing). — I like sushi
Well, yeah. That is part of the major problem I am highlighting here. Anyone studying ethics should have this topic at the very forefront of their minds as there is no second chance with this. The existential threat to humanity could be very real if AGI comes into being. — I like sushi
It is pretty much like handing over all warhead capabilities to a computer that has no moral reasoning and saying 'only fire nukes at hostile targets'. — I like sushi
I'm pretty certain AGI, or strong AI, does indeed refer to sentient intelligences, but I'll just go with your definition. — ToothyMaw
Making AI answerable to whatever moral facts we can compel it to discover doesn't resolve the threat to humanity, however, but rather complicates it.
Like I said: what if the only discoverable moral facts are so horrible that we have no desire to follow them? What if following them would mean humanity's destruction? — ToothyMaw
Do you see what I mean? — I like sushi
If AGI hits then it will grow exponentially more and more intelligent than humans. If there is no underlying ethical framework then it will just keep doing what it does more and more efficiently, while growing further and further away from human comprehension. — I like sushi
I guess there is the off chance of some kind of cyborg solution — I like sushi
That sounds kind of horrific. — ToothyMaw
How on earth are we to program AI to be 'ethical'/'moral'? — I like sushi
I don't think we can "program" AGI so much as train it like we do children and adolescents, mostly, learning from stories and by example ( :yikes: ) ... similarly to how we learn 'language games' from playing them. — 180 Proof
I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization. — 180 Proof
My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct; besides, its 'sense of responsibility' may or may not be consistent with human responsibility. How or why 'AGI' decides whatever it decides will be for its own reasons, which humans might or might not be intelligent enough to either grasp or accept. — 180 Proof
we, or it, can discover Moral Truths (Moral Realism) — I like sushi
Yes – preventing and reducing² agent-dysfunction (i.e. modalities of suffering (disvalue)¹ from incapacity to destruction) facilitated by 'nonzero sum – win-win – resolutions of conflicts' between humans, between humans & machines and/or between machines.
¹moral fact
²moral truth (i.e. the moral fact of (any) disvalue functions as the reason for judgment and action / inaction that prevents or reduces (any) disvalue) — 180 Proof
I assume neither the first nor the last, only AGI's metacognitive "independence". — 180 Proof
Even though there are many things we don’t understand about how other organisms function, we don’t seem to have any problem getting along with other animals, and they are vastly more capable than any AGI. — Joshs
AGI will effectively outperform every individual human being on the planet. A year of a single researcher's work could be done by AGI in a day. AGI will be tasked with improving its own efficiency, and thus its computational power will surpass even further what any human is capable of (it already does).
The problem is AGI is potentially like a snowball rolling down a hill. There will be a point where we cannot stop its processes because we simply will not fathom them. A sentient intelligence would be better as we would at least have a chance of reasoning with it, or it could communicate down to our level. — I like sushi
Computation is not thought. — Joshs
You do at least appreciate that a system that can compute at a vastly higher rate than us on endless tasks will beat us to the finish line though — I like sushi
True, we provide the tasks. What we do not do is tell it HOW to complete the tasks — I like sushi
The AI doesn't know what a finish line is in relation to other potential games, only we know that. — Joshs
Perhaps true of (most) "AI", but not true of (what is meant by) AGI.
I think he meant an algorithm following a pattern of efficiency NOT a moral code (so to speak). It will interpret as it sees fit within the directives it has been given, and gives to itself, in order to achieve set tasks. — I like sushi
I am suggesting that IF AGI comes to be AND it is not conscious this is a very serious problem (more so than a conscious being). — I like sushi
How do we set the goal of achieving Consciousness when we do not really know what Consciousness means to a degree where we can explicitly point towards it as a target? — I like sushi
AI doesn’t know why it is important to get to the finish line, what it means to do so in relation to overarching goals that themselves are changed by reaching the finish line, and how reaching the goal means different things to different people. — Joshs
This clarification is very helpful. AGI can independently use its algorithms to teach itself routines not programmed into it? — ucarr
At the risk of simplification, I take your meaning here to be concern about a powerful computing machine that possesses none of the restraints of a moral compass. — ucarr
I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions". — 180 Proof
AGI's lack of awareness (hence why I would prefer a conscious AGI than not). — I like sushi
I do not equate, or confuse, "awareness" with being "conscious" (e.g. blindsight¹). Also, I do not expect AGI, whether embodied or not, will be developed with a 'processing bottleneck' such as phenomenal consciousness (if only because biological embodiment might be the sufficient condition for a self-modeling² system to enact subjective-affective phenomenology).
objectives instituted by human beings
Unlike artificial narrow intelligence (e.g. prototypes such as big data-"trained" programmable neural nets and LLMs), I expect artificial general intelligence (AGI) to learn how to develop its own "objectives" and comply with those operational goals in order to function at or above the level of human metacognitive performance (e.g. normative eusociality³).
[W]ho is to say what is or is not moral?
We are (e.g. as I have proposed), and I expect AGI will learn from our least maladaptive attempts to "say what is and is not moral"³.