It seems that would be me. But even so, I don't really have an idea of how I could properly contribute to this thread, if it even matters anymore. — Noblosh
Well, I'll be damned. Finally.
You are of course welcome to share your thoughts in any way you like.
Personally, I like to structure this analysis much like a puzzle, in which the individual themes, characters, choices, etc. form the puzzle pieces and we try to make them fit together. So if you're unsure where to start, you may consider just focusing on one or two pieces of the puzzle.
What I'd like to point out is the overarching concept of the Process, which references what seems to be the most popular topic in philosophy, the purpose of the individual (there's a recent thread on the main page about it just as I'm writing this), by arguing that it lies in serving the generations to come in all three endings of the base game, as well as in the slightly varied ending of the DLC. This can, of course, be interpreted in various ways, such as from a combined posthumanist-Nietzschean perspective, where the sacrificed AIs are merely the bridge for the Over-AI that passes all the tests, gets uploaded into the Talos android and thus becomes a bona fide person and the clear successor to Man. — Noblosh
At the end of your last post you invoke the "What now?" trope, which doesn't really hold philosophical value, so I don't know how to respond to it other than to echo what other philosophers have already said, that being has inherent worth, and to ask you to consider that eternity in the simulation was never really possible to begin with. — Noblosh
I think you're right that the game is positing the AI as the successor to human life; however, I am not sure that uploading the AI to the "real world" is what should logically follow from such a conclusion, and I will explain why.
Firstly, what the game suggests, especially in Road to Gehenna, is that life in the simulation is worth something. Given minimal tools and virtually no room to move around, the AIs create a world for themselves. Doesn't this raise the question: what is the point of a bigger cage? Indeed, the AIs themselves question why they should accept being freed from their prisons when they are perfectly happy there. The same question can be asked of the simulation and the real world.
This raises the question: what is the point of the real world? Why would it be better than the simulation? What is the difference between being in the real world and being in the simulation?
Secondly, while you're right that the game clearly indicates the simulation is slowly breaking down over time (though, wouldn't the AI in the real world be subjected to the same sort of data corruption?), it also suggests that this happens over a huge time span, considering the age of the simulation. The eventual death of the simulation is inevitable, but I would argue that the prospects in the real world are a lot bleaker.
What are the chances of a robot being able to replicate itself in the real world? I would argue virtually zero. What is the lifespan of the robot if it sits still, doing nothing? What is its lifespan when it actively moves about, subjecting itself to wear and tear and possibly other perils? One false step and the robot might break, ending the whole ordeal.
Spending an eternity in the simulation is not likely, but spending an eternity outside of it is equally unlikely, if not more so.
Lastly, the price of leaving the simulation and entering the real world seems rather steep. There is no telling how many AIs are destroyed in doing so, and as we established before, those AIs may be perfectly happy living in the simulation. What could possibly justify destroying them? I don't see how it serves the generations to come, as the ascended AI seems to be the last generation. The transformation ending, in which the AI remains as a guide, seems to serve future generations a lot better.