• Michael
    15.7k
    If you can't say anything to bridge this explanatory gap then you can't claim anything "in principle" here.apokrisis

    Says the man who keeps saying that it's impossible in principle for a machine to be conscious?

    All I'm saying is that the brain isn't a miracle. Consciousness happens because of ordinary (even if complex) physical processes. If these processes can happen naturally then a sufficiently advanced civilization should be able to make them happen artificially. Unless consciousness really is magic and only ordinary human reproduction can bring it about?
  • apokrisis
    7.3k
    I suppose that if you were only simulating one mind, you could make your simulation domain smaller than if you were, say, simulating the entire population of the earth.SophistiCat

    I see the problem as being not just a difference in scale but one of kind. If you only had to simulate a single mind, then you don't even need a world for it. Essentially you are talking solipsistic idealism. A Boltzmann brain becomes your most plausible physicalist scenario.

    But how does it work if we have to simulate a whole collection of minds sharing the one world? Essentially we are recapitulating Cartesian substance dualism, just now with an informational rather than a physicalist basis.

    It should be telling that the Simulation Hypothesis so quickly takes us into the familiar thickets of the worst of sophomoric philosophy discussions.
  • apokrisis
    7.3k
    Says the man who keeps saying that it's impossible in principle for a machine to be conscious?Michael

    What I keep pointing out is the in principle difference that biology depends on material instability while computation depends on material stability. So yes, I fill in the gaps of my arguments.

    See .... https://thephilosophyforum.com/discussion/comment/68661

    I've been talking about using biological material rather than inorganic matter so the above is irrelevant.Michael

    It can't be irrelevant if you want to jump from what computers do - hex code to throw a switch - to what biology might do.

    If you want to instead talk about "biological material", then please do so. Just don't claim biology is merely machinery without proper support. And especially not after I have spelt out the "in principle" difference between machines and organisms.
  • SophistiCat
    2.2k
    I see the problem as being not just a difference in scale but one of kind. If you only had to simulate a single mind, then you don't even need a world for it. Essentially you are talking solipsistic idealism. A Boltzmann brain becomes your most plausible physicalist scenario.apokrisis

    Well, yes, you do need a world even for a single mind - assuming you are simulating the mind of a human being, rather than a Boltzmann brain, which starts in an arbitrary state and exists for only a fraction of a second. Solipsism is notoriously unfalsifiable, which means that there isn't a functional difference between the world that only exists in one mind and the "real" world. But if you are only concerned about one mind, then you can maybe bracket off/coarse-grain some of the world that you would otherwise have to simulate. Of course, that is assuming that your simulation allows for coarse-graining.
  • apokrisis
    7.3k
    But if you are only concerned about one mind, then you can maybe bracket off/coarse-grain some of the world that you would otherwise have to simulate.SophistiCat

    Sure. Just simulating one mind in its own solipsistic world of experience is the easy thing to imagine. I was asking about the architecture of a simulation in which many minds are sharing a world. How could that work?

    And also the Simulation Hypothesis generally asks us to believe the simplest compatible story. So once we start going down the solipsistic route, then a Boltzmann brain is the logical outcome. Why would you have to simulate an actual ongoing reality for this poor critter when you could just as easily fake every memory and just have it exist frozen in one split instant of "awareness"?

    Remember Musk's particular scenario. We are in a simulation that spontaneously arises from some kind of "boring" computational multiverse substrate. So simulating one frozen moment is infinitely more probable than simulating a whole lifetime of consciousness.

    I'm just pointing out that half-baked philosophy ain't good enough here. If we are going to get crazy, we have to go the whole hog.
  • apokrisis
    7.3k
    Consciousness happens because of ordinary (even if complex) physical processes. If these processes can happen naturally then a sufficiently advanced civilization should be able to make them happen artificially.Michael

    Sure. Nature produced bacteria, bunny rabbits, the human brain. This stuff just developed and evolved without much fuss at all.

    Therefore - in principle - it is not impossible that if we wait long enough and let biology do its thing, the Spinning Jenny, the Ford Model T and the Apple iPhone will also just materialise out of the primordial ooze.

    It's a rational extrapolation. Sufficiently severe evolutionary pressure should result in every possible instance of a machine. It's just good old physics in action. Nothing to prevent it happening.
  • ssu
    8.7k
    Do we? How?SophistiCat
    Are you serious? Well, to give an easy example: if you modelled reality with just Newtonian physics, your GPS system wouldn't be as accurate as the one we have now, which takes relativity into account. And there's a multitude of other examples where the idea of reality as a clockwork mechanical system doesn't add up.
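
    To put rough numbers on it, here is a back-of-the-envelope sketch in Python (the orbital figures are standard textbook approximations, my own addition rather than anything from the thread):

    ```python
    # Rough sketch: why GPS cannot ignore relativity (textbook values assumed).
    GM    = 3.986e14       # Earth's gravitational parameter, m^3/s^2
    C     = 2.998e8        # speed of light, m/s
    R_E   = 6.371e6        # Earth's radius, m
    R_SAT = R_E + 2.02e7   # GPS orbital radius (~20,200 km altitude), m
    DAY   = 86400.0        # seconds per day

    v = (GM / R_SAT) ** 0.5                          # orbital speed, ~3.9 km/s

    sr_loss = -(v**2 / (2 * C**2)) * DAY             # special relativity: ~ -7 us/day
    gr_gain = (GM / C**2) * (1/R_E - 1/R_SAT) * DAY  # general relativity: ~ +46 us/day
    net = sr_loss + gr_gain                          # ~ +38 us/day

    print(f"net satellite clock drift: {net * 1e6:.1f} microseconds/day")
    print(f"ranging error if uncorrected: {net * C / 1000:.1f} km/day")
    ```

    A purely Newtonian model sets both corrections to zero, and the position fix drifts by kilometres within a day.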

    If you believe that conscious beings are outside any general order of things, then obviously you will reject the simulation conjecture for that reason alone. So there is nothing to talk about.SophistiCat
    That has to be the strawman argument of the month. Where did I say "conscious beings are outside any general order of things"?

    Definitions do matter. If we talk about computers, then the definition of how they work - that they follow algorithms - matters too. Apokrisis explains this very well on the previous page:

    Computation is nothing more than rule-based pattern making. Relays of switches clicking off and on. And the switches themselves don't care whether they are turned on or off. The physics is all the same. As long as no one trips over the power cord, the machine will blindly make its patterns. What the software is programmed to do with the inputs it gets fed will - by design - have no impact on the life the hardware lives.

    Now from there, you can start to build biologically-inspired machines - like neural networks - that have some active relation with the world. There can be consequences and so the machine is starting to be like an organism.

    But the point is, the relationship is superficial, not fundamental. At a basic level, this artificial "organism" is still - in principle - founded on material stability and not material instability. You can't just wave your hands, extrapolate, and say the difference doesn't count.
    apokrisis
  • SophistiCat
    2.2k
    And also the Simulation Hypothesis generally asks us to believe the simplest compatible story. So once we start going down the solipsistic route, then a Boltzmann brain is the logical outcome. Why would you have to simulate an actual ongoing reality for this poor critter when you could just as easily fake every memory and just have it exist frozen in one split instant of "awareness"?

    Remember Musk's particular scenario. We are in a simulation that spontaneously arises from some kind of "boring" computational multiverse substrate. So simulating one frozen moment is infinitely more probable than simulating a whole lifetime of consciousness.
    apokrisis

    You need enormous probabilistic resources in order to realize a Boltzmann brain. AFAIK, according to mainstream science, our cosmic neighborhood is not dominated by BBs. BBs are still a threat in a wider cosmological modeling context, but if the hypothetical simulators just simulate a random chunk of space of the kind that we find ourselves in, then BBs should not be an issue.
  • SophistiCat
    2.2k
    Are you serious? Well, to give an easy example: if you would model reality with just Newtonian physics, your GPS-system wouldn't be so accurate as the present GPS system we now have, that takes into account relativity.ssu

    And if you do it with Lego blocks it will be less accurate still (funnier though). But I am not sure what your point is. Do you suppose that computers are limited to simulating Newtonian physics? (That's no mean feat, by the way: some of the most computationally challenging problems that are solved by today's supercomputers are nothing more than classical non-relativistic fluid dynamics.)

    That has to be the strawman argument of the month. Where did I say "conscious beings are outside any general order of things"?

    Definitions do matter. If we talk about Computers, then the definition of how they work, that they follow algorithms, matters too.
    ssu

    Well, the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.

    Apokrisis explains this very well on the previous pagessu

    I am not sure what that business with instability is about, but I haven't really looked into this matter. I know that the simulation hypothesis is contentious - I have acknowledged this much.
  • apokrisis
    7.3k
    ...according to mainstream science, our cosmic neighborhood is not dominated by BBs.SophistiCat

    According to mainstream science, we ain’t a simulation either. We were talking about Musk’s claim, which involves “enormous probabilistic resources”. The BB argument then becomes one way that the claim blows itself up. If it is credible that some "boring substrate" generates simulated realities, then the simulation we are most likely to inhabit is the one that is most probable - the one requiring the least of this probabilistic resource.

    The fact that this then leads to the BB answer - that the simulation is of a single mind's single frozen moment - shows how the whole simulation hypothesis implodes under its own weight.

    I'm just pointing out the consequences of Musk's particular line of argument. He doesn't wind up with the kind of Matrix style simulation of many fake minds sharing some fake world in a "realistic way" that he wants.

    And even if the "substrate" of that desired outcome is some super-intelligent race of alien mad scientists building a simulation in a lab, then I'd still like to know how the actual architecture of such a simulation would work.

    As I said, one option essentially recapitulates idealism, the other substance dualism. And both outcomes ought to be taken as a basic failure of the metaphysics. We can learn something from that about how muddle-headed folk are about "computational simulation" in general.
  • apokrisis
    7.3k
    I am not sure what that business with instability is about,SophistiCat

    I explained in this post how biology - life and mind - is founded on the regulation of instability.

    Biology depends on molecules that are always on the verge of falling apart (and equally, just as fast reforming). And so the hardware of life is the precise opposite of the hardware suitable for computing. Life needs a fundamental instability as that then gives its "information" something to do - ie: create the organismic-level stability.

    So from the get-go - down at the quasi-classical nanoscale of organic chemistry - semiosis is giving the biophysics just enough of a nudge to keep the metabolic "machinery" rebuilding itself. Proteins and other constituents are falling together slightly more than they are falling apart, and so the fundamental plasticity is being statistically regulated to produce a long-running, self-repairing, stable organism.

    The half-life of a microtubule - a basic structural element of the cell - is about 10 minutes. So a large proportion of what was your body (and brain) this morning will have fallen apart and rebuilt itself by the time this evening comes around.
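
    To spell out the arithmetic: twelve hours is 72 ten-minute half-lives, so the fraction of this morning's microtubules still standing (never having once fallen apart) by evening is roughly

    ```latex
    \left(\tfrac{1}{2}\right)^{72} \approx 2 \times 10^{-22}
    ```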

    This is molecular turnover. All the atoms that make you you are constantly being churned. So whatever you might remember from your childhood would have to be written into neural connections that have got washed away and rebuilt - more or less accurately, you hope - trillions of times.

    The issue is then whether this is a bug or a feature. Machinery-minded folk would see molecular turnover as some kind of basic problem that biology must overcome with Herculean effort. If human scientists are going to reverse-engineer intelligence, the first thing they would want to do is start with some decently stable substrate. They wouldn't look for a material that just constantly falls apart, even if it is also just as constantly reforming as part of some soupy chemical equilibrium.

    But this is just projecting our machine prejudices onto the reality of living processes. We are learning better now. It is only because of soupy criticality that the very possibility of informational regulation could be a thing. Instability of the most extreme bifurcating kind brings with it the logical possibility of its control. Instability produces uncontrolled switching - a microtubule unzipping into its parts, and also rezipping, quite spontaneously. All you need then is some kind of memory mechanism, some kind of regulatory trick, which can tip the soupy mix in a certain direction and keep it rebuilding just a little faster than it breaks down.
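
    The statistical point lends itself to a toy model - a minimal sketch with made-up rates, not measured biochemistry. An unregulated polymer that gains and loses subunits with equal probability wanders at random and sooner or later collapses; give the same soup a 1% regulatory nudge toward rebuilding and it persists indefinitely.

    ```python
    import random

    def polymer_run(p_on, p_off, steps=10_000, start=100):
        """Toy dynamic instability: each tick a subunit is added with
        probability p_on and removed with probability p_off.
        A length of 0 means the polymer has fallen apart for good."""
        length = start
        for _ in range(steps):
            if random.random() < p_on:   # spontaneous assembly
                length += 1
            if random.random() < p_off:  # spontaneous disassembly
                length -= 1
            if length <= 0:
                return 0
        return length

    random.seed(1)
    # Unregulated soup: assembly and disassembly exactly balanced.
    print(polymer_run(p_on=0.50, p_off=0.50))  # random walk; often hits zero
    # Regulated soup: a 1% statistical bias toward rebuilding.
    print(polymer_run(p_on=0.51, p_off=0.50))  # persists and keeps growing
    ```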

    So this is a fundamental metaphysical fact about reality. If you have radical instability, that brings with it the very possibility of stabilising regulation. Chaos already plants the seeds of its own ordering.

    An engineer wants solid foundations. Machines need stable parts that won't immediately fall apart. But life and mind want the opposite. And so right there you have a causal and metaphysical-level difference that artificial mind or artificial life has to deal with.

    Silicon switches are just not the right stuff as, by design, there is minimal chance of them entropically falling apart, and even less chance that they will negentropically put themselves back together.

    Yet every part of every cell and neuron in your body is doing this all day long. And knowing how to do this is fundamental to the whole business of existing as an informational organism swimming in a flow of environmental entropy.

    Life and mind can organise the material world, bend its erosive tendencies to their own long-term desires. This is the basic scientific definition of life and mind as phenomena. And you can see how machine intelligence or simulated realities are just not even playing the game. The computer scientists - playing to the gullible public - haven't got a clue of how far off they are.

    Well, the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.SophistiCat

    You see here how readily you recapitulate the "everything is really a machine" meme. And yet quantum physics shows that even material reality itself is about the regulation of instability.

    Atomism is dead now. Classicality is emergent from the fundamental indeterminism of the quantum realm. Stability is conjured up statistically, thermodynamically, from a basic instability of the parts.

    The simulation hypothesis takes the world to be stably classical at some eventual level. There is some fixed world of atomistic facts that is the ground. And then the only problem to deal with is coarse-graining. If we are modelling the reality, how much information can we afford to shed or average over without losing any essential data?

    When it comes to fluid turbulence, we know that it has a lot of non-linear behaviour. Coarse-graining can miss the fine detail that would have told us the process was on some other trajectory. But the presumption is that there is always finer detail, until eventually you could arrive at a grain where the reality is completely deterministic. That then makes coarse-graining an epistemic issue, not ontic. You can choose to live with a degree of imprecision in the simulation, since close is good enough for all practical purposes.
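
    A standard toy illustration of that presumption's weak spot (the logistic map is my example here, not anything from the thread): in a chaotic regime, shaving the state down to six decimal places costs you the entire trajectory within a few dozen steps.

    ```python
    def logistic(x, r=4.0):
        """One step of the logistic map, fully chaotic at r = 4."""
        return r * x * (1.0 - x)

    x_fine   = 0.123456789012345  # the "true" fine-grained state
    x_coarse = round(x_fine, 6)   # the same state after coarse-graining

    for _ in range(60):
        x_fine, x_coarse = logistic(x_fine), logistic(x_coarse)

    # The initial difference of ~2e-7 roughly doubles every step,
    # so after ~25 steps the two trajectories have fully decorrelated.
    print(abs(x_fine - x_coarse))  # order 1: no predictive power left
    ```

    Close is good enough for all practical purposes only so long as those purposes don't include following the trajectory itself.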

    That mindset then lets you coarse-grain simulate anything. You want to coarse-grain a model of consciousness? Sure, fine. The results might look rather pixellated, not that hi res, as a first go. But in principle, we can capture the essential dynamics. If we need to, we can go back in and approach the reality with arbitrary precision .... because there is a classically definite reality at the base of everything to be approached in this modelling fashion.

    For engineers, this mindset is appropriate. Their job is to build machines. And part of their training is to get some real world feel for how the metaphysics of coarse-graining can turn around and bite them on the bum.

    But if we are talking about bigger philosophical issues, then we have to drop the belief that reality is actually some kind of physical machine. Its causality is irreducibly more complex than that. Both biology and physics tell us that now.
  • ssu
    8.7k
    And if you do it with Lego blocks it will be less accurate still (funnier though). But I am not sure what your point is.SophistiCat
    Ok, I'll try to explain again - thanks for having the interest, and hopefully you'll get through this long answer. Let's look at the basic argument, the one that you explain the following way:

    the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.SophistiCat

    Ok, the question is about premiss (b): any orderly system can be simulated on a computer.

    Because given the definition you yourself just stated above, premiss (b) can be rewritten as: any orderly system can be simulated by only following instructions. And by "following instructions" we mean using algorithms - computing.

    And what follows here - don't get carried away to other things - is plain mathematics. There exist non-computable but true mathematical objects. You can call these orderly systems too. The math is correct, there is a correct model of them, they aren't mystical; the only thing is that they are simply uncomputable. Now if a computer has to compute them, it obviously cannot do it.

    So how do we ask a computer something for which there exists a correct model, but which it cannot compute? Simply by creating a situation where the correct answer depends on what the computer doesn't do - in other words, negative self-reference. You get this with Turing's Halting Problem. Now you might argue that this is quite far-fetched, but it isn't once the computer has to interact with the outside world and take the effects of its own actions into account. In the vast majority of cases this isn't a problem: you can deal with it with "computer learning", or basically a cybernetic system, a feedback loop.
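
    For concreteness, here is a minimal sketch of the diagonal argument behind the Halting Problem. The halts() oracle is purely hypothetical - the whole point of the sketch is that no such function can exist:

    ```python
    # Hypothetical oracle: halts(f, x) returns True iff f(x) eventually halts.

    def paradox(f):
        """Do the opposite of whatever the oracle predicts about f(f)."""
        if halts(f, f):   # oracle says f(f) halts...
            while True:   # ...so loop forever instead
                pass
        else:
            return        # oracle says f(f) loops, so halt at once

    # Now consider paradox(paradox):
    # if halts(paradox, paradox) is True, paradox(paradox) loops - oracle wrong;
    # if it is False, paradox(paradox) halts - oracle wrong again.
    # So no total, always-correct halts() can be written: a program that must
    # "do something other than what the instructions predict" defeats it.
    ```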

    With negative self-reference you cannot do it. And notice, you don't need consciousness or anything mystical of that sort here (so please stop saying that I'm implying this). The basic problem is that as the computer has an effect on what it is modelling, its actions make it a subject, while the mathematical model ought to be objective. Sometimes it's still possible to give the correct model and the problem of subjectivity can be avoided, but not with negative self-reference.

    I'll give you an example of the problem of negative self-reference: try to say or write down a sentence in English that you never in your life have said or written, or ever will. Question: do such sentences exist? Surely yes, as your life, like mine, is finite. The thing is that you cannot say them, while I or others here can. Computation has simple logical limits to self-reference. I can give other examples of this, for instance the problem of forecasting the correct outcome when there obviously is one, but it cannot be computed.

    When you think about it, this is the problem of giving a computer the instruction "do something other than what is in the instructions". Unless there is a further instruction telling it what to do when confronted with this kind of instruction, the computer cannot do it, because "do something else" is not in the instructions. "Do something else" means negative self-reference to the very instructions the computer is following.

    Why is this important? Because interaction with the world is filled with these kinds of problems, and the assumption that one can solve them by computation - by following instructions - is hard to sustain when the problems start from mathematical logic itself. It's like arguing that everything is captured by Newtonian physics when it isn't so.
  • apokrisis
    7.3k
    The basic problem is that as the computer has an effect on what it is modelling, its actions make it a subject, while the mathematical model ought to be objective. Sometimes it's still possible to give the correct model and the problem of subjectivity can be avoided, but not with negative self-reference.ssu

    I agree with this but would also point out how it still doesn't break with the reductionist presumption that this fact is a bug rather than a feature of physicalist ontology.

    So it is a problem that observers would introduce uncertainty or instability into the world being modelled and measured. And being a problem, @Michael and @SophistiCat will feel correct in shrugging their shoulders and replying that coarse-graining can ignore the fact - for all practical purposes. The problem might well be fundamental and ontic. But also, it seems containable. We just have to find ways to minimise the observer effect and get on with our building of machines.

    I am taking the more radical position of saying both biology and physics are fundamentally semiotic. The uncertainty and instability is the ontic feature which makes informational regulation even a material possibility. It is not a flaw to be contained by some clever trick like coarse graining. It is the resource that makes anything materially organised even possible.

    Self-reference doesn't intrude into our attempts to measure nature. Nature simply is self-referential at root. In quantum terms, it is contextual, entangled, holistic. And from there, informational constraints - as supplied for instance by a cooling/expanding vacuum - can start to fragment this deep connectedness into an atomism of discrete objects. A classical world of medium-sized dry goods.

    The observer effect falls out of the picture in emergent fashion - although human observers can restore that fundamental quantum holism by experimental manipulation, recreating the world as it is when extremely hot/small.
  • ssu
    8.7k
    I agree with this but would also point out how it still doesn't break with the reductionist presumption that this fact is a bug rather than a feature of physicalist ontology.

    So it is a problem that observers would introduce uncertainty or instability into the world being modelled and measured. And being a problem, Michael and @SophistiCat will feel correct in shrugging their shoulders and replying that coarse-graining can ignore the fact - for all practical purposes. The problem might well be fundamental and ontic. But also, it seems containable. We just have to find ways to minimise the observer effect and get on with our building of machines.
    apokrisis
    You nailed it, Apokrisis - this is exactly what has been done.

    The assumption has been that with better models, and in time, things like this can be avoided or even solved. It simply doesn't sink in that this is a fundamental and inherent problem. The only area where it has been confronted is quantum mechanics: nobody claims that quantum mechanics and relativity are totally reducible to Newtonian mechanics, or that the problematic issues of QM can simply be avoided so that we can carry on with Newtonian mechanics.

    It really might seem containable, until you notice that ever since the 1970s computer scientists have been predicting an imminent breakthrough in AI. Of course, we still don't have true AI. We just have advanced programs that can, from a limited point of view, trick us into thinking they have it.

    I am taking the more radical position of saying both biology and physics are fundamentally semiotic. The uncertainty and instability is the ontic feature which makes informational regulation even a material possibility. It is not a flaw to be contained by some clever trick like coarse graining. It is the resource that makes anything materially organised even possible.apokrisis
    That's the basic argument here on the mathematical side: when something is uncomputable, you really cannot compute it. It's an ontic feature that cannot be contained by some clever trick.

    Self-reference doesn't intrude into our attempts to measure nature. Nature simply is self-referential at root. In quantum terms, it is contextual, entangled, holistic. And from there, informational constraints - as supplied for instance by a cooling/expanding vacuum - can start to fragment this deep connectedness into an atomism of discrete objects. A classical world of medium-sized dry goods.apokrisis
    And hence mathematical models don't work as well here as they do in some other fields. That's the outcome. Does there exist a mathematical model for evolution? Can Darwinism be explained by an algorithm, by a computable model? Some quotes about this question:

    Biological evolution is a very complex process. Using mathematical modeling, one can try to clarify its features. But to what extent can that be done? For the case of evolution, it seems unrealistic to develop a detailed and fundamental description of phenomena as it is done in theoretical physics. Nevertheless, what can we do?

    Evolution is a highly complex multilevel process and mathematical modeling of evolutionary phenomenon requires proper abstraction and radical reduction to essential features.

    Basically, mathematical modeling is used in various ways, but there is no single mathematical model of evolution. Now this should tell people something.
  • SophistiCat
    2.2k
    Ok, the question is about premiss (b): any orderly system can be simulated on a computer.ssu

    Yes, after I posted that, I realized that I overreached a bit. There are indeed "regular" systems that nevertheless cannot be simulated to arbitrary precision (indeed, if we sample from all mathematically possible systems, then almost all of them are uncomputable in this sense). However, most of our physical models are "nice" like that; the question then is whether that is due to modelers' preference or whether it is a metaphysical fact. Proponents of the simulation hypothesis bet on the latter, that is that the hypothetical "theory of everything" (or a good enough approximation) will be computable.
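
    The counting argument behind that parenthetical, sketched briefly (standard textbook material rather than anything specific to this thread):

    ```latex
    % Why "almost all" systems are uncomputable:
    \begin{align*}
      |\{\text{programs}\}| &= \aleph_0
        && \text{finite strings over a finite alphabet} \\
      |\{\text{computable reals}\}| &\le \aleph_0
        && \text{each needs a program to print its digits} \\
      |\mathbb{R}| &= 2^{\aleph_0} > \aleph_0
        && \text{uncountable (Cantor)}
    \end{align*}
    % So the computable reals form a measure-zero subset of the reals:
    % a "generic" constant, state, or trajectory is uncomputable
    % with probability 1.
    ```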

    So how do we ask a computer something for which there exists a correct model, but which it cannot compute? Simply by creating a situation where the correct answer depends on what the computer doesn't do - in other words, negative self-reference. You get this with Turing's Halting Problem. Now you might argue that this is quite far-fetched, but it isn't once the computer has to interact with the outside world and take the effects of its own actions into account. In the vast majority of cases this isn't a problem: you can deal with it with "computer learning", or basically a cybernetic system, a feedback loop.ssu

    It is difficult to understand what you are trying to say here, but my best guess is that you imagine a simulation of our entire universe - the actual universe that includes the simulation engine itself. That would, of course, pose a problem of self-reference and infinite regress, but I don't think anyone is proposing that. A simulation would simulate (part of) a universe like ours - with the same laws and typical conditions.
  • ssu
    8.7k
    Yes, after I posted that, I realized that I overreached a bit. There are indeed "regular" systems that nevertheless cannot be simulated to arbitrary precision (indeed, if we sample from all mathematically possible systems, then almost all of them are uncomputable in this sense). However, most of our physical models are "nice" like that; the question then is whether that is due to modelers' preference or whether it is a metaphysical fact. Proponents of the simulation hypothesis bet on the latter, that is that the hypothetical "theory of everything" (or a good enough approximation) will be computable.SophistiCat
    Seems to me that we are finding some kind of common ground here. Cool.

    So the point here is just to remember that if there is one black swan, not all swans are white. But anyway, assuming they're all white doesn't lead everything astray, as the vast majority of them are indeed white. And this is the important point: understanding the limits of the models we use gives us a better understanding of the issue at hand.

    I have found this very useful, especially in economics, because people often make the disastrous mistake of believing that economic models portray reality as well as the laws of physics explain moving billiard balls. Believe me, in the late 1990s I had an assistant yelling at me that the whole idea of speculative bubbles existing in modern financial markets was totally ludicrous and hence not worth studying, because the financial markets work so well. The professor had to calm him down and say that this is something we don't know yet. But the assistant was great at math!

    It is difficult to understand what you are trying to say here, but my best guess is that you imagine a simulation of our entire universe - the actual universe that includes the simulation engine itself. That would, of course, pose a problem of self-reference and infinite regress, but I don't think anyone is proposing that. A simulation would simulate (part of) a universe like ours - with the same laws and typical conditions.SophistiCat
    I think you've got it now. But it can also be far more limited in scope - not the entire universe, just wherever and whenever the computer's actions have effects that result in this kind of loop.
  • Arkady
    768
    Believe me, in the late 1990s I had an assistant yelling at me that the whole idea of speculative bubbles existing in modern financial markets was totally ludicrous and hence not worth studying, because the financial markets work so well.ssu
    Wow. And the late 1990s were the time of the dot-com bubble, so he was really missing the forest for the trees...