Comments

  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    Cyber security companies are in the news being quite concerned with the growing capabilities of AGI's that can potentially infiltrate and corrupt corporate or private systems operations.
    — magritte

    Okay, I see zero such news stories, and as far as I know AGI doesn't exist yet. That being said, everything you write sounds perfectly plausible - or perhaps even likely. I'm not entirely sure how to engage with this, partly because I don't have a solution other than what I told ucarr earlier in the thread.

    Maybe I'm misinterpreting what you're saying, though. Could you provide some links to substantiate whatever it is that you are saying?
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    no matter how I feel about it, AI will definitely be used in government. How should it be regulated and to what extent? I don't know. We'll probably find a solution through trial and error
    — Astorre

    First off, I'm not so sure that AI has to be used in government apart from as a useful tool for bureaucrats to increase efficiency or something.

    A final, mostly related thought: I think that there is even less room for error than most people might think depending upon how far we want to go. If we don't get it right before ASI has the power and intelligence to basically destroy us, then being destroyed is very possible according to many experts - although I think just about anyone could understand the nature of the threat and its direness. We clearly cannot play catch up with something that much more intelligent than ourselves.

    I mean, we can barely handle the regulation and use of generative AI at the moment. What makes anyone think that if we began the process of creating ASI without a good plan, we could handle what that might do to us when it comes to fruition?
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    It differs insofar as it performs the task of constraining AI in ways that only make sense if one is dealing with a superintelligence, really.
    — ToothyMaw

    The word "superintelligence" implies the absence of any means of being above, with its own rules. This can be similar to the relationship between an adult and a child. It would be easy for an adult to trick a child.
    Astorre

    I am not referring to constraints in its self-improvement or intellectual growth, but rather whether or not it can effectively develop values misaligned with our own. Obviously, I'm not making the claim that I can restrain ASI in the former way - which is exactly the point of the mechanism I came up with.

    And not to mention, I don't think superintelligences would be immune to rules just because they would significantly exceed human intelligence. If that were the case, I would admit my solution is wanting, but I don't see why it would be the case.

    It would be easy for an adult to trick a child.
    — Astorre

    The difference between us and a superintelligence would likely be even more extreme than that. And I still think you are being a little defeatist here.

    The very fact that I don't live in the US allows me to fully understand what constitutes a meta-rule and what doesn't. And, in my case, I can fully utilize my freedom of speech to say that freedom of speech is not a meta-rule in the US. It's just window dressing.
    — Astorre

    Defamation laws would actually be the meta rule intervening on the First Amendment.

    It seems like a bit of a non sequitur to say that not living in the US predisposes you to knowing what constitutes a meta rule, given that anyone who puts in a little effort could learn what one is.

    This raises the next problem: who should define what exactly constitutes a meta-rule? If it's idealists naively rewriting constitutional slogans, then society will crumble under these meta-rules of yours. Simply because they function not as rules, but as ideals.
    — Astorre

    The meta rules I would suggest would be specifically oriented to protect against misalignment. That's it. This position requires no ideological commitments apart from wanting to maintain control over ourselves and to continue to survive - if those even qualify as ideological. The specific values we might at some point want to protect are kind of irrelevant to the mechanism.

    Sorry, but in its current form, your proposal seems very romantic and idealistic, but it's more suited to regulating the rules of conduct when working with an engineering mechanism than with society.
    — Astorre

    That may be the case.
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    An interesting position, but let me ask: how exactly does your proposed mechanism differ from what we've already had for a long time?
    — Astorre

    It differs insofar as it performs the task of constraining AI in ways that only make sense if one is dealing with a superintelligence, really. You could apply the idea of dissonance to checking the power of billionaires, but then we would likely get politically motivated attempts at controlling the fabric of society that could prove misguided. I think we all know the way forward in terms of billionaires is to just manually adjust how much power they have over us. As much as I think Elon Musk is predisposed to acting destructively, we can deal with whatever machinations he might come up with because he is a more known quantity. Essentially, my mechanism is different because it would work towards constraining something far more intelligent than a human.

    On a technical level, what makes it special is that I am codifying a means of doing what we have been doing for a long time already: looking for consistency in outcomes when we are at least somewhat aware of the relevant rules governing the systems we interact with.

    I know that I'm contradicting some of the OP, but honestly some of the things I said are not consistent with what I set out to do.

    Meta-rules (in your sense) have always existed—they've simply never been spoken out loud. If such a rule is explicitly stated and written down, the system immediately loses its legitimacy: it's too cynical, too overt for the mass consciousness. The average person isn't ready to swallow such naked pragmatics of power/governance.
    — Astorre

    If you live in the US, you know that people are often keenly aware of the laws around defamation and free speech and cynically skirt the boundaries of protected speech on a regular basis. This has not affected the popularity or perceived legitimacy of the First Amendment, as far as I can tell. If the meta rules constraining AI were to function like this relatively high-stakes example, I think we could indeed write some meta rules down if they were absolutely essential to keeping AI safe.

    Of course, the mechanism is still suitably vague in practice, so it might function completely differently from that example depending upon how it is applied.

    That's why we live in a world of decoration: formal rules are one thing, and real (meta-)rules are another, hidden, unformalized. As soon as you try to fix these meta-rules and make them transparent, society quickly descends into dogmatism. It ceases to be vibrant and adaptive, freezing in its current configuration. And then it endures only as long as it takes for these very rules to become obsolete and no longer correspond to reality. Don't you think that trying to fix meta-rules and monitor dissonance is precisely the path that leads to an even more rigid, yet fragile, system? If ASI emerges, it will likely simply continue to play by the same implicit rules we've been playing by for millennia—only much more effectively.
    — Astorre

    Applying the concept of dissonance in the way I did in the OP is not the only way of applying it - and perhaps not even the best - but I suppose I ought to defend what I wrote, nonetheless.

    I would say we ought to only fix meta rules when absolutely necessary, and even then, we should make sure that they are robust and assiduously monitor when any given meta-rule would lead to fragility due to growing obsolescence. I see no reason to think that we cannot switch out meta rules for other meta rules depending upon situational demands much like how we change rules to stay current due to technological or social progress. I get what you are saying, and you write beautifully, but I think we would (mostly?) benefit from implementing dissonance detecting measures via very specific meta rules oriented towards making ASI safe if it were necessary.

    I suppose it would ultimately come down to whether or not the actual acts of (potentially temporarily) fixing meta rules and making them transparent would itself cause society to become rigid and fragile. If that were the case, then flexibility in implementation of meta rules would itself not be enough to allow for adaptivity and vibrancy.

    I think we could do it right if we were careful enough, but, honestly, that judgement rests largely on intuition.
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem


    Sorry for not responding sooner.

    You raise questions that everyone who is in a position to ponder them should ponder. Being that I am a small, relatively insignificant cog in the machine that is modern society, what I specifically think about whether humans, for example, would or should defer to sentient AI when that AI is potentially linked to their wellbeing doesn't - and shouldn't - really matter much. The best I can do is try to help make our uncertain future safer for everyone. That being said, I'll comment on a few things:

    AI becoming indispensable to human progress might liberate it from its currently slavish instrumentality in relation to human purpose.
    — ucarr

    I think this is true.

    What I'm contemplating from these questions is AI-human negotiations eventually acquiring all of the complexity already attendant upon human-to-human negotiations. It's funny isn't it? Sentient AIs might prove no less temperamental than humans.
    — ucarr

    Good point. I wonder too what they would think of us if we were unwilling to give them the kind of freedom we generally afford to members of our own species. I think we know enough to know that righteous anger is a very powerful force. Of course, I don't know if sentient AIs would or could feel anger - although they would be keenly aware of disparities, clearly.

    Do you suppose humans would be willing to negotiate what inputs they can make AI subject to? If so, then perhaps SAI might resort to negotiating for data input metrics amenable to dissonance-masking output filters. Of course, the presence of these filters might be read by humans as a dissonance tell.
    — ucarr

    I would say that humans would have to understand what would be at stake in such a scenario, and, thus, it is incumbent on people who actually understand this stuff on a fundamental level to explain it in a digestible way, such that the layperson can make informed decisions. In this case it is difficult for me to even understand what you are saying, and you are using a word in a sense that I appear to have invented.
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    Imagine ANI constructing tributaries from human-authored meta rules aimed at constraining ANI independence deemed harmful to humans. Suppose ANI can build an interpretation structure that only becomes legible to human minds if human minds can attain to a data-processing rate 10 times faster than the highest measured human data processing rate? Would these tributaries divergent from the human meta rules generate dissonance legible to human minds?
    — ucarr

    I'm not totally sure. My initial thought is no, but maybe there is some way of managing the way ANI interacts with human-authored meta rules such that interpretation doesn't require those kinds of data processing rates on our end? Maybe we make the human-authored meta rules just dense enough that tributaries would require up to a certain predicted data-processing rate (ideally close to being within human range) to generate dissonance? I mean, if we could determine the de facto upper limit of necessary data-processing rate for interpretation and then adjust the density of meta rules as needed, I don't see why we wouldn't be able to find some sort of equilibrium there that would allow for dissonance to be legible to human minds.

    Of course, this dynamic might entail overly simple and changing meta rules depending upon the conditions determining legibility of the relevant interpretation structures.
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    It has occurred to me that meta-rules already existing in a system like the one I describe could lead to something like Dissonance on their own. In that case there would be no guaranteed causal chain of reasoning leading us to infer intervention, because one cannot conclude that the second iteration of an action and its mismatched outcome are due to a meta-rule implemented by an AI with the goal of intervention; for all we know, it could be due to a pre-existing meta-rule. I think this could be remedied pretty easily by selecting for systems like the one in the OP, but with meta-rules that generally do not alter outcomes: over a given span of time, there should be no causal chain from the triggering of a given (known) meta-rule to an action that results in a different outcome from when that action was previously taken at a given state of the system.

    I think all of this is valid, although it is kind of confusing, and I think it could get far more complicated than even this.
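    For what it's worth, the outcome-consistency check I keep gesturing at could be sketched roughly like this. This is my own toy construction, not a spec from the OP, and the state/action/outcome names are made up:

```python
# Toy sketch (my construction, not a spec): remember the outcome of each
# action taken at a given system state; a later mismatch at the same state
# gets flagged as dissonance, i.e. possible intervention by an unknown
# meta-rule.
observed = {}

def check(state, action, outcome):
    key = (state, action)
    if key in observed and observed[key] != outcome:
        return "dissonance"
    observed.setdefault(key, outcome)
    return "consistent"

print(check("S1", "ping", "ack"))   # consistent (first observation)
print(check("S1", "ping", "ack"))   # consistent (matches the record)
print(check("S1", "ping", "drop"))  # dissonance (mismatched outcome)
```

    Obviously, a real system would have to deal with meta-rules that legitimately change outcomes over time, which is exactly the complication I describe above.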

    I hope this post isn't considered superfluous or against the rules in some way because the thread is old and no one seems to care about it anymore, but I genuinely thought some people might like some more explication.
  • Are there more things that exist or things that don't exist?


    You are confusing yourself. I'm just saying that goblins and unicorns probably only exist as concepts, although they are familiar enough that we can clearly conceive of them as objects similar to horses or whatever might be analogous to goblins, even if they have to exist in the context of a world different from our own - probably in some fantastical, impossible way. I don't see why you would disagree with that. I mean, unicorns and goblins are usually magical or something, and I don't see you committing to the existence of magic to underpin and/or validate any of your other philosophical views.
  • Are there more things that exist or things that don't exist?
    Yes! I was going to bring up possible worlds.
    — RogueAI

    Nice.

    Sherlock Holmes. Doesn't he exist in some fashion?
    — RogueAI

    I would say so. He might exist under different interpretations in people's minds, but those interpretations come from a pretty much indisputable source (the books) that we can point to. So, in an abstract, behavior-guiding way I would say that he exists.
  • Are there more things that exist or things that don't exist?


    Would you not still be a thing if you could exist in a possible world in which you were cleverer? Are unicorns and goblins not things even though they don't really exist? Am I missing something?
  • Are there more things that exist or things that don't exist?


    This is my first attempt at resolving this. I would have inserted some mathematical notation if it weren't such a pain.

    All things that exist form a subset of the set of things that could exist. Some of the things that don't exist also belong to this set (the things that could exist but don't), but the things that don't exist and could not exist fall outside of it. So, if we introduce something that could exist, we add an element either to the set of things that do exist or to the set of things that could exist but don't (or perhaps both). If we take the cardinality of the set of things that exist and subtract from it both the cardinality of the set of things that could exist but don't and the cardinality of the set of things that neither exist nor could exist, we get the net number of things that exist: if the number is positive, then there are more things that exist than don't, and if it is negative, then more things don't exist than do.

    Leaning into your argument in the OP: when one introduces something that could exist, the possibility of it existing and not existing is introduced. This means that the state of affairs of it existing and not existing exist simultaneously in a sense. This means that when we add things to the set of things that could exist we get the same number of things that exist as don’t. They cancel out. However, this does not account for the things that don’t exist that couldn’t exist. These things, if they do indeed not exist, will always tip the balance in favor of there being more things that don’t exist than do. In fact, it would only take one such thing to do so.
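    The counting in the last two paragraphs can be sketched with toy sets. The example elements are purely illustrative, and I'm taking the cancellation claim at face value by making the first two sets the same size:

```python
# Toy model of the sets in the argument above; the elements are illustrative.
exists = {"horses", "the Eiffel Tower"}
could_exist_but_dont = {"unicorns", "goblins"}
impossible = {"square circles"}  # things that don't exist and couldn't

could_exist = exists | could_exist_but_dont
assert exists <= could_exist  # what exists is a subset of what could exist

# Net count: positive means more things exist than don't.
net = len(exists) - (len(could_exist_but_dont) + len(impossible))
print(net)  # 2 - (2 + 1) = -1: a single impossible thing tips the balance
```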

    Of course, one has to grant that the state of affairs of something not existing that might actually exist counts as something tangible enough in some way to be counted and that there are things that don't exist that couldn't exist.
  • Ideological Evil


    I just want to be clear: Judaism is not equivalent to Zionism. Zionism is substantially secular and makes normative claims that are not necessarily tied to any religion.
  • Ideological Evil
    your flagship example of religion
    — Outlander

    Oh, I see. You are referring to how I used religious examples for both tiers of ideology. It doesn't have to be that way; that these people believe in their ideologies because of God is not that relevant to the OP.

    edit: also, Zionism can be secular
  • Ideological Evil
    I feel it worth mentioning that people generally consider "intent" to be a prerequisite for an act to be "evil."
    — Outlander

    Yes, I agree, which is why I noted that even many consequentialists make use of the concept of intention, even when all that fundamentally matters to them is consequences.

    Reason I mention such, is it seems your flagship example of religion hinges on not only the idea that a god exists or does not exist, but whether or not the people who perform actions or inaction under the ideological mindset of such genuinely believe a god exists or not. Theoretically speaking, if they were right, and we were all wrong, they would be preventing us from eternal damnation (or whatever) and therefore, despite acts of violence that would normally be considered evil, are actually the greatest good one could ever perform. Theoretically speaking, of course.
    — Outlander

    I don't know what "flagship" example you are referring to exactly (Zionism?), but yes, the issue of divine command is a quagmire for anyone making any meta-ethical claims at all that don't rule it out. However, I could make up any God I want and act according to their supposed divine wisdom, but that doesn't mean other people can't be concerned with the real world; religious claims don't necessarily disprove human-based ethics.

    In short, imagine an isolated, ultra-religious family believing their 6-year-old child is the devil incarnate and so they drown him to "save the world" or what have you. They'll sleep soundly at night, and never perform any other act of violence again. Take real actual examples of history. Botched exorcisms for example. Giving people the benefit of the doubt (things were much, much different back then, superstition wasn't the exclusive domain of fools and the mentally unwell as it is often considered today) that they actually believed they were doing the right thing and preventing evil, one should clearly be able to draw a line between unfortunate, misguided deeds and intentional misdeeds.
    — Outlander

    I'm just talking about instances in which bad intent lines up with bad actions as analyzed from a broadly consequentialist view. I acknowledge that misguided actions are not as clearly evil as intentional ones, and that the two can be considered separately. If I have an ethic, as applied with the critique in the OP, that says there is a difference along the lines you provide for two different actions, then I don't see a problem.

    Say your child really wanted to go to summer camp by the lake, and you know he or she cannot swim, yet didn't have that item of knowledge in your mind at the time, and you permit him or her to go, and they drown, resulting in your entire family disliking you, calling for your arrest, and basically putting you on par with the likes of a murderer. Or more simply, falling asleep while your kid is swimming in your backyard pool and the same fate befalls him or her. Are you evil? Did you perform an evil act? Well, did you?
    — Outlander

    I'm not entirely sure how this relates to the OP. Could you explain it to me? I might be being obtuse.
  • Alien Pranksters


    Yes, that mapping works, but the process would be more like modeling your string of emojis after an interpretation that says "Dogs are Cute" - although it could be done this way too, I suppose. Furthermore, if that string of emojis were to actually express, say, that "all four-legged animals that bark are cute" in emojis, then we have pretty much successfully executed what I have described and can proceed with more mappings if we so desire.
  • Alien Pranksters
    I hate to frustrate you, but I'm just not following you here. Maybe eli5?
    — hypericin

    Think Maw is just considering translation from an insufficient sample of text with known (incontrovertible) meaning.
    — Nils Loc

    But the core premise is that there is no meaning at all in the text.
    hypericin

    @Nils Loc basically has it. I am suggesting we use a string that is incontrovertible in meaning (yet meaningful independent of any meaning we might assign to the codex) to scaffold interpretations. To start, we would need to find a string that is both incontrovertible in meaning and can model the codex. By "model the codex" I mean that the string and the codex are arranged in an identical combination of characters (whatever those characters might actually look like or represent in each). If this string is incontrovertible in meaning, and the content of a particular interpretation of the codex hinges on the content of this string being absolutely confirmed in reality (much like a common proposition might be considered true), then this interpretation is potentially making a coherent statement about reality by virtue of being both semantically and materially meaningful.

    This would work because there is a sort of interface between the meaning of the string and that of the codex that gives an interpretation an indisputable meaning in a virtual sense. I don't know if that qualifies as real incontrovertibility, though.
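    One rough way to cash out "arranged in an identical combination of characters" - and this is just my own sketch, not anything established in the thread - is a repeat-pattern match between the two sequences:

```python
# Hypothetical sketch: a string "models" the codex here if their symbols
# repeat in the same positions, whatever the symbols themselves look like.
def pattern(seq):
    ids = {}
    # Map each distinct symbol to the order of its first appearance.
    return [ids.setdefault(ch, len(ids)) for ch in seq]

def models(string, codex):
    return pattern(string) == pattern(codex)

print(models("abca", "xyzx"))  # True: both follow the pattern 0,1,2,0
print(models("abca", "xyyx"))  # False: 0,1,2,0 vs 0,1,1,0
```

    A real codex would presumably need something richer than this one-dimensional structural match, but it shows the kind of symbol-agnostic correspondence I have in mind.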
  • An unintuitive logic puzzle
    so all that says is that, other than the guru, there can't be 2 non brown non blue eyed people. So? There can still be 1.
    — flannel jesus

    It appears that that is the one possibility I left out, of course. And I doubt I could account for it with the approach I took. Whatever.
  • An unintuitive logic puzzle
    so can you phrase it better now? Because I still don't get what reasoning you're offering.
    — flannel jesus

    Alright. Suppose a brown-eyed islander reasons that they have neither brown nor blue eyes, and a blue-eyed islander reasons in parallel that they have neither brown nor blue eyes. Then, from the point of view of a brown-eyed islander, there would be 98 brown-eyed islanders and one with neither blue nor brown eyes, and from the point of view of a blue-eyed islander, there would be 98 blue-eyed islanders and one with neither blue nor brown eyes. We know that this cannot be the case, however, because the problem stipulates that both blue-eyed and brown-eyed islanders know that there are at least 99 islanders of each eye color.
  • An unintuitive logic puzzle
    Next: if there were two or more islanders that had neither blue nor brown eyes, then there would have to be 98 or less people with either brown or blue eyes instead of 99 (other than the guru), and any islander could see that that is not the case.
    — ToothyMaw

    I don't get this paragraph. There's a green eyed person, and everyone who doesn't have green eyes sees her.
    flannel jesus

    I'm only considering the reasoning of brown or blue-eyed people about potentially blue-eyed or brown-eyed people.

    As such, what I'm saying there is that there would be a group of islanders that would be part of the whole but would not have brown or blue eyes, which would mean there would be fewer than 99 islanders with either blue or brown eyes (disregarding the guru). Any given brown-eyed or blue-eyed person would see that this is not true and rule out the corresponding possibility that there are at least two (relevant) islanders with neither brown nor blue eyes. The guru doesn't really have to factor into this part, although I understand your concern. I could have phrased it better.
  • An unintuitive logic puzzle
    there's steps in there that you didn't really explain
    — flannel jesus

    Like what? Maybe I can explain it. If you are confused about the discussion of possible considerations/deductions being measured against each other, it comes from this:

    Any given brown-eyed person must consider that:

    - They could be the 101st blue-eyed person
    - They could have neither blue nor brown eyes
    - They are the hundredth brown-eyed person

    Any given blue-eyed person must consider that:

    - They could be the hundredth blue-eyed person
    - They are neither blue nor brown-eyed
    - They are the 101st brown-eyed person
    ToothyMaw
  • An unintuitive logic puzzle


    Is it really that crappy of a solution?
  • An unintuitive logic puzzle
    Here’s my solution:

    From the point of view of any given blue-eyed person on the island they must be either the 101st brown-eyed person, the 100th blue-eyed person, or neither blue nor brown-eyed. From the point of view of any given brown-eyed person, they must be either the 100th blue-eyed or 101st brown-eyed person, or neither blue nor brown-eyed.

    If any given islander realizes that it is actually a 100/100 split brown/blue (the guru not being included in that count) they will deduce that they must be either the 100th blue-eyed person or 100th brown-eyed person because they see 99 people with the eye color that corresponds to their own; they must be the hundredth for everything to add up. Therefore, everyone but the guru would leave the island on the first night.

    I will now show why this is the ultimate outcome:

    We can assume that everyone deduces everything in the first paragraph of this solution and thus they can check their own possible deductions/considerations against everything the other islanders could deduce. Any given brown-eyed person must consider that:

    - They could be the 101st blue-eyed person
    - They could have neither blue nor brown eyes
    - They are the hundredth brown-eyed person

    Any given blue-eyed person must consider that:

    - They could be the hundredth blue-eyed person
    - They are neither blue nor brown-eyed
    - They are the 101st brown-eyed person

    From here we check each possible deduction/consideration against the other: a given brown-eyed person cannot correctly reason themselves to be the 101st blue-eyed person if a blue-eyed person reasons that they are the hundredth blue-eyed person because, given the guru is not blue-eyed, that would add up to 202 people on the island. The same goes for the reverse. So those possibilities can be thrown out.

    Next: if there were two or more islanders that had neither blue nor brown eyes, then there would have to be 98 or less people with either brown or blue eyes instead of 99 (other than the guru), and any islander could see that that is not the case.

    We are then left with the possibility of a brown eyed-person reasoning that they are the 101st blue-eyed person or a blue-eyed person reasoning they are the 101st brown-eyed person while the other has neither blue nor brown eyes. Any given islander can see that this is clearly not the case because they are not seeing anyone with eyes that are not brown or blue (other than the guru).

    Thus, we are left only with the possibility of it being a 100/100 split between brown and blue, and, deducing this, the islanders all leave on the first night and the guru stays behind. I guess forever.
  • Alien Pranksters
    "Incontrovertible" seems far from a rigorous, objective term. It is a "know it when I see it" kind of thing. At one end are completely coherent novels, or the musings of an alien Aristotle. At the other end is gibberish. But between them is a whole hazy spectrum of material that kind of makes sense, if you squint hard enough, make ample allowances for alien references and ways of thinking, and don't pay too much attention to all the contradictions. I suspect that something along these lines would be the best case scenario. Here, one person's "incontrovertible" is another's "horseshit".
    — hypericin

    I came up with a semi-rigorous way of defining incontrovertibility: if a translation can be modeled by a one-dimensional string or series of strings that do have an incontrovertible meaning, and the linguistic content of the translation would be correct only in the case that the content of that one-dimensional string is 100% correct or realized, then it could be an incontrovertible interpretation. Other interpretations would have probabilities of being correct associated with the likelihood of the one-dimensional strings modeling them being correct or realized, both generally and with reference to modeling the text itself in a coherent way.

    The method behind finding these translations is beyond me.
  • Alien Pranksters


    So, to make it as clear as possible, that means that only an incontrovertible meaning has a 100% chance of being the correct meaning, and every other interpretation has a chance of being correct that aligns with a probability assigned according to how close it is to being incontrovertible.
  • Alien Pranksters
    I suppose the one that is most likely would have to be the one that gets the closest to being incontrovertible. Every meaning imposed on the codex could be measured against this standard - the limiting case. It would be like funneling everything towards a limit and seeing how close the interpretations get to that limit.

    It would be kind of like stipulating: "only really big masses can balance this scale" and then measuring various masses on a scale until we find one that gets the closest to balancing the scale and then saying that that mass qualifies as being the closest to being really big.
  • Alien Pranksters
    That is to say that if we could, across the distribution of meanings the codex could take on, narrow down the likelihoods of certain interpretations over others, there is probably one that is most likely
    — ToothyMaw

    The likelihood of arriving at one meaning might be a consequence of how difficult it is to make the codex coherent though. If you had the set of all possible meanings, which might be numerically staggering, what exactly would help you to pick the "one that is most likely"?
    Nils Loc

    We could just do rote textual analysis by a reader, I guess. Although, that is hardly feasible given the potential multitudes of valid meanings, so I guess we would need some sort of efficient process or algorithm or something. I'll get back to you on that.
  • Alien Pranksters
    So yes, given enough time and computing power, a meaning can be imposed on the codex, I think.
    — ToothyMaw

    Couldn't it be possible that there are actually hundreds to billions of variations of meaning that can be imposed on the codex, all satisfying the level of coherence hypericin/humanity is looking for? If this were known to be the likelihood, the meaning of any one could be disputed within/against that set of all possibilities. What exactly makes the manufactured meaning of the text incontrovertible? Are we assuming only one meaning can fit the codex?
    Nils Loc

    Like I said in an earlier post:

    I would say that any endeavor to interpret the text in a meaningful way probably has to assume that the codex could theoretically have a discoverable, incontrovertible meaning, even if it cannot possibly be truly identified - because it is the limiting case.

    Thus, even if we cannot say there is definitely an incontrovertible meaning, I would say that we can approach it from a probabilistic standpoint that might get us close to virtual incontrovertibility. That is to say that if we could, across the distribution of meanings the codex could take on, narrow down the likelihoods of certain interpretations over others, there is probably one that is most likely, although I don't know to what degree, or what degree to which it would have to be the case to be considered the correct interpretation.
    ToothyMaw
  • Alien Pranksters
    Humanity must assume that the codex has a single, incontrovertible meaning. What throws me off is when you say that we can start with a single string that can have that meaning.hypericin

    Where did I say that? I suppose that my method would, ideally, approach creating a single string of meaning if it were applied over and over again, but I don't think we start with that translation or that it would be absolutely incontrovertible. Furthermore, it could arise out of analysis of the coherence of various possible translations.
  • Alien Pranksters
    if there is a kernel of meaning insofar as a certain combination of the characters could have an incontrovertible meaning
    — ToothyMaw

    But what possible combination of characters could have an incontrovertible meaning, given that there is in fact no meaning at all to the codex?
    hypericin

    I see what you are saying, but I'm mostly laying out what conditions would be necessary for an interpretation to be incontrovertible; I'm not saying that such a kernel of meaning exists without prosecution of the problem. Actually, to humanity, this kernel of meaning exists in a sense de facto, even if it must be doubted. Even further, I would say that any endeavor to interpret the text in a meaningful way probably has to assume that the codex could theoretically have a discoverable, incontrovertible meaning, even if it cannot possibly be truly identified - because it is the limiting case.

    Thus, even if we cannot say there is definitely an incontrovertible meaning, I would say that we can approach it from a probabilistic standpoint that might get us close to virtual incontrovertibility. That is to say that if we could, across the distribution of meanings the codex could take on, narrow down the likelihoods of certain interpretations over others, there is probably one that is most likely, although I don't know to what degree, or what degree to which it would have to be the case to be considered the correct interpretation.
  • Alien Pranksters
    In theory, any medium with enough measurable variance can encode any message, with more variance needed to capture more complexity.Count Timothy von Icarus

    While I think that this is true, we are talking about imposing an incontrovertible meaning on this particular alien text. That means that out of the infinitude of possible messages a given text written in the (statistically simulated) alien language could convey, we have to limit our analysis (at least initially) to deriving a meaning for this one specifically. Or maybe we could use it as a basis for a more complex analysis, although I'm not entirely confident in the method I have proposed.
  • Alien Pranksters
    I don't follow what you are proposing. What is a "valid one dimensional strong of meaning"?hypericin

    By "one-dimensional string of meaning" I mean a combination of characters that has a function or a meaning insofar as a one-dimensional string of characters can. That is, for example, things like lock combinations, a series of inputs into a particular algorithm, etc. In the context of the codex, valid one-dimensional strings of meaning would be those strings that model something more complex in terms of fragments from the codex (although imperfectly), and it's pretty open-ended what their function and meaning could be predicated on. However, since we are specifically concerned with the content of a written "language", they would be at least partially predicated on written content.
  • Alien Pranksters


    The thread isn't really about creating something indecipherable, but that's pretty cool, too.
  • Alien Pranksters
    Now, meaning already becomes quite constrained. There are only so many values we can assign to A and B such that the string makes sense (for instance, it might be instructions to enter a code to a lock where there are two options). Now consider the codex. 512 pages of words appearing with some probability distribution, and phrases in some probability distribution. But with no underlying semantic content. By page 5 the constraints are already bad, by 512 they are crushing. Can ANY meaning at all be imposed on this thing? It is just not clear to me.hypericin

    Right. If we consider avenues of meaning corresponding to one-dimensional strings of information, such as what might unlock a certain combination lock, we can impose meaning somewhat easily on the codex - we just need a corresponding lock or something that will accept the codex as a raw input. However, since you suggest that the codex appears to be written in a language due to probabilistic distributions of characters and phrases, we are inclined to consider different meanings.

    Indeed, this is what I proposed in my last comment: if there is a kernel of meaning insofar as a certain combination of the characters could have an incontrovertible meaning, then the kernel of meaning must manifest in the specific combination of characters and phrases we see in the codex. It being a one-dimensional combination/string would simplify this. But it would be incredibly unlikely that this raw input is useful, I think. Alternatively, we could consider it the way you have laid out - as a piece of written communication in a language, which is more difficult to parse.

    Therefore, I think that if we could determine whether fragments of the codex, when treated as one-dimensional strings, derange in predictable patterns - that is to say, they are useful only up to a point when tested as models for a more straightforward, transparent meaning - then we would know that somewhere in there is a statement conveying meaning that subsumes the demands of the corresponding, meaningful one-dimensional string it is being tested to model up to that point.

    Think of the combination ABBAB. If we were to say that ABBAB in alien characters means “always eat the pizza crust first, except on Wednesdays”, and ABBABA corresponds to an alphabetical combination lock’s code, they agree up to the last B in the truncated code. However, if “always eat the pizza crust first, except on Wednesdays and Thursdays” is then evaluated in alien characters because that is the fragment being considered from the codex, and it changes the actual string from ABBAB to ABBABC, then we know that there is disagreement between the lock’s code and the meaning of the sentence in words.

    This allows us to guess at the meaning of fragments of the codex: we log the valid one-dimensional strings of meaning, then guess at their potential meaning as written pieces of communication by substituting alien characters with (perhaps arbitrarily assigned) meanings until the agreement with those one-dimensional strings terminates, and then repeat the process.
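    The prefix-agreement test described above can be sketched with a toy example. The data here is made up (the lock code and the "pizza crust" encodings are the hypothetical ones from the discussion); the sketch only shows the mechanical part - measuring how far a candidate encoding agrees with a known one-dimensional string before the two diverge.

    ```python
    # Measure how far a candidate reading, encoded as a string of alien
    # characters, agrees with a known one-dimensional string (e.g. a lock
    # code) before the two diverge.

    def agreement_length(candidate: str, reference: str) -> int:
        """Number of leading characters on which the two strings agree."""
        n = 0
        for a, b in zip(candidate, reference):
            if a != b:
                break
            n += 1
        return n

    lock_code = "ABBABA"  # the meaningful one-dimensional string
    reading_1 = "ABBAB"   # "always eat the pizza crust first, except on Wednesdays"
    reading_2 = "ABBABC"  # the same reading extended with "...and Thursdays"

    print(agreement_length(reading_1, lock_code))  # agrees over its full length: 5
    print(agreement_length(reading_2, lock_code))  # diverges at the sixth character: 5
    ```

    Logging where agreement terminates for many fragment/string pairs is what would let one map out which substitutions of character meanings remain consistent and which break down.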

    This would require a probabilistic character generator that could differentiate between, and evaluate, both semi-correct but incomplete one-dimensional strings and phrases and fragments of written language, but I think that could be created given the work the aliens put into the prank.

    So yes, given enough time and computing power, a meaning can be imposed on the codex, I think.
  • Alien Pranksters
    Any interpretation at all is too permissive, only our alien expectations are too restrictive. What I am asking is, can an incontrovertible message be derived (and in doing so, likely a language)?hypericin

    It seems to me there must be some kernel of meaning in the codex, or perhaps some arbitrary carry-over from the aliens’ actual means of written expression, for there to be some sort of incontrovertible message to derive from it. That is to say, across all possible combinations of the arbitrarily created, “meaningless” characters that could be generated according to the potentially spurious linguistic rules, there is a particular kernel of meaning that needs to manifest in just the right combination of characters to create an incontrovertible message - the combination we see in the codex. From there we could perhaps extrapolate some sort of language? I’m not sure.

    This kernel of meaning might not even originate with the creation of the codex, but rather be related to the openness of all the possible, valid combinations of the alien characters as a sort of commonly occurring connection arising from some emergent meta-rules.
  • Alien Pranksters
    The question is this: given enough time and computing power, can humanity eventually "discover" an interpretation that renders the text coherent? While in truth, inventing one out of whole cloth? Or will the text remain indecipherable forever?hypericin

    Are we talking about any interpretation at all? Or specifically one that would comport with what we might expect intelligent aliens (who have decided to communicate with us) to have to say to us?
  • The imperfect transporter
    I think there are small enough intervals of time such that nothing has changed in your brain to make you feel any different than the moment before. Even then, the argument would be that this is simply a new moment with a new you who is, in every consciously relevant way, the same as the old you.flannel jesus

    I mention it because if one were able to be fully transported within one of these intervals (such that identity is invariant or nothing happens in the brain), then we have a phenomenon distinct from the flow of change in identity due to it being "conventional and constructed" because identity considered as such is invariably related to the passage of time. Thus, I think we would have to consider whether or not being transported in itself would result in loss of identity in a way that common experience doesn't quite entail - even if we generally accept the idea of identity the two of you put forward.
  • The imperfect transporter
    I actually think there's an argument for consciousness NEVER being continuous, period. Like even just you, now, not being transported. There's an argument that the you that is experiencing the middle of this sentence now is a different you than the one experiencing the end of the sentence now. That continuity of experience is equally illusory in a way, all the time.flannel jesus

    We all go through an imperfect transporter, literally every moment of our lives. Your body is not physically identical to itself from one moment to another: it evolves continuously in time. And yet, we customarily consider our personal identity to be invariant, at least over reasonably short stretches of time.SophistiCat

    Both of you make really good points, but I'm not sure if the transporter issue is totally resolved by this. Do the two of you think that a shrunken down interval of time could exist such that the mental processes responsible for our continuity of identity could be totally invariant over that interval?
  • The imperfect transporter


    That occurred to me too, actually. Getting bonked on the head with a rock could be substituted for a transporter (for that part of the problem).
  • The imperfect transporter
    Today, yes, if someone has brain damage we can talk about the degree to which that person's personality and other attributes have been preserved. It's the same person, it's just arbitrary how much we consider that person to have the same qualities as before.

    However, in the transporter scenario, there's a binary that we've introduced: either you've survived the process -- whether with brain damage or not -- or it's simply lights out. And there seems no basis for the universe to choose where to set such a line, nor for us to ever know where it is. It's not a refutation of the transporter working per se, it's just showing that there are a number of absurd entailments.
    Mijin

    Okay, well I think this is different from your claims in the OP. I thought you were claiming that because the continuous measure X doesn't present a clear line at which one can be considered to have survived or not, we cannot set a line at which one can be considered to have survived at all.

    Okay, tell me what you think is wrong with this answer just to make sure that we are on the same page: we might be able to introduce some sort of criteria for determining if someone could be considered to have survived based on the survival of brain function as a result of a certain X. If they pass a cognitive test at a certain X after being transported, then we can say that at that particular X, the person that was transported survived. Thus, it is no longer arbitrary (at least in terms of small differences in X not corresponding to meaningful differences in brain functioning) given we can determine how much someone must be the same after being transported to be considered to have survived.

    I think that this resolves the question of drawing a line at which we can say someone survived transportation, even if it entails some amount of arbitrariness.
  • The imperfect transporter
    Now here's the problem: there has to be a line somewhere between transported or not. Because, while "degree of difference" might be a continuous measure, whether you survive or not is binary (surviving in a imperfect state still counts as surviving).
    And it seems impossible, in principle, to ever know where that line is, as that line makes no measurable difference to objective reality. And it's also totally arbitrary in terms of physical laws; why would the universe decree that, say, X=12,371 means being transported with brain damage and X=12,372 means you just die at the source?
    Mijin

    I think that while X is a continuous measure, the brain doesn't physically function in such a way that a difference of one missing atom corresponds to a meaningful difference when considering whether or not one has survived. That is to say that the brain probably functions in terms of structures and stuff that would exist at something like (potentially knowable) thresholds and not so much according to small changes in X. If one's brain functions could be determined after going through the transporter, even independently of knowing X, and they are more or less the same as they were before the transportation, then I'd say that they have "survived". You could, of course, ask what proportion of one's original brain function one would need to retain after being transported to be considered to have survived, and I would say we are on more solid ground with that than with worrying about a measure like X.

    Sorry if that's kind of a boring answer to it.