...if superintelligences are the inevitable products of progress, we need some way of keeping them safe despite the possibility of value misalignment, the difficulty of encoding certain important human concepts into them, etc. — ToothyMaw
This is because the generation of the superseding species (or entity) dovetails with the yielding species achieving its highest self-realization through the instantiation and establishment of the superseding species. — ucarr
Imagine ANI constructing tributaries from human-authored meta-rules aimed at constraining ANI independence deemed harmful to humans. Suppose ANI builds an interpretation structure that only becomes legible to human minds if they can attain a data-processing rate ten times faster than the highest measured human rate. Would these tributaries, divergent from the human meta-rules, generate dissonance legible to human minds? — ucarr
...if we could determine the de facto upper limit of necessary data-processing rate for interpretation and then adjust the density of meta rules as needed, I don't see why we wouldn't be able to find some sort of equilibrium there that would allow for dissonance to be legible to human minds. — ToothyMaw
AI becoming indispensable to human progress might liberate it from its currently slavish instrumentality in relation to human purpose. — ucarr
What I'm contemplating from these questions is AI-human negotiations eventually acquiring all of the complexity already attendant upon human-to-human negotiations. It's funny, isn't it? Sentient AIs might prove no less temperamental than humans. — ucarr
Do you suppose humans would be willing to negotiate what inputs they can make AI subject to? If so, then perhaps SAI might resort to negotiating for data input metrics amenable to dissonance-masking output filters. Of course, the presence of these filters might be read by humans as a dissonance tell. — ucarr
An interesting position, but let me ask: how exactly does your proposed mechanism differ from what we've already had for a long time? — Astorre
Meta-rules (in your sense) have always existed—they've simply never been spoken out loud. If such a rule is explicitly stated and written down, the system immediately loses its legitimacy: it's too cynical, too overt for the mass consciousness. The average person isn't ready to swallow such naked pragmatics of power/governance. — Astorre
That's why we live in a world of decoration: formal rules are one thing, and real (meta-)rules are another, hidden, unformalized. As soon as you try to fix these meta-rules and make them transparent, society quickly descends into dogmatism. It ceases to be vibrant and adaptive, freezing in its current configuration. And then it endures only as long as it takes for these very rules to become obsolete and no longer correspond to reality. Don't you think that trying to fix meta-rules and monitor dissonance is precisely the path that leads to an even more rigid, yet fragile, system? If ASI emerges, it will likely simply continue to play by the same implicit rules we've been playing by for millennia—only much more effectively. — Astorre
It differs insofar as it constrains AI in ways that only make sense if one is dealing with a superintelligence. — ToothyMaw
If you live in the US, you know that people are often keenly aware of the laws around defamation and free speech and cynically skirt the boundaries of protected speech on a regular basis. — ToothyMaw
The word "superintelligence" implies the absence of anything above it that could impose its own rules. The relationship would be like that between an adult and a child: it would be easy for the adult to trick the child. — Astorre
The very fact that I don't live in the US allows me to fully understand what constitutes a meta-rule and what doesn't. And, in my case, I can fully utilize my freedom of speech to say that freedom of speech is not a meta-rule in the US. It's just window dressing. — Astorre
This raises the next problem: who should define what exactly constitutes a meta-rule? If it's idealists naively rewriting constitutional slogans, then society will crumble under these meta-rules of yours, simply because they function not as rules but as ideals. — Astorre
Sorry, but in its current form, your proposal seems very romantic and idealistic, but it's more suited to regulating the rules of conduct when working with an engineering mechanism than with society. — Astorre
Meta-rules that already exist in a system like the one I describe could lead to something like Dissonance, and therefore there would be no guaranteed causal chain of reasoning leading us to infer intervention: one cannot conclude that a second iteration of an action and its mismatched outcome are due to a meta-rule implemented by an AI with the goal of intervention; for all we know, it could be due to a pre-existing meta-rule. — ToothyMaw
I doubt morality has a logical basis, otherwise we could teach morality to AI. — Astorre
no matter how I feel about it, AI will definitely be used in government. How should it be regulated and to what extent? I don't know. We'll probably find a solution through trial and error — Astorre
Cybersecurity companies are in the news as quite concerned with the growing capabilities of AGIs that could potentially infiltrate and corrupt corporate or private system operations. — magritte