• Marchesk
    2.5k
    If they were truly indistinguishable, though, they'd just be other humans (not "machines," if by that is meant something non-human). So that amounts to not much. — The Great Whatever

    Not indistinguishable, but rather fully capable. Data wouldn't pass a Turing Test (too easy to tell he is a machine the way he talks), but he is conscious.
  • Wosret
    3.2k


    No, it isn't, and I actually think that it will be more difficult for A.I.s to parse natural language in an active physical world than merely abstractly in a conversation. I still don't take such movies seriously: they simply create their universe and make whatever they want true, regardless of real-world feasibility, and, as Aristotle would tell you, they are rendered false the moment they're taken as true, and only retain their truth to the extent that they're understood to be false.
  • Marchesk
    2.5k
    Clearly a foolproof test. We don't seriously question whether other people have minds until the question is brought up, mainly because of their history and origin more so than their functionality. — Wosret

    Neither movie presents a dumbed-down Turing Test. Anyway, there are plenty of examples in literature and movies. Some of the machines are very human-like, and some are very machine-like, but they all possess a deep understanding of the first person (meaning they're doing more than mimicking), because they're all conscious.
  • Marchesk
    2.5k
    Movies are just a means of discussing the p-zombie/consciousness question put forth in the OP. The scenarios are fictional, but so is p-zombieland.
  • Marchesk
    2.5k
    What Chalmers does is imagine that you can subtract consciousness and behavior will remain the same, because physicalism can account for all behavior. I think that's deeply problematic.
  • Wosret
    3.2k
    I don't know about p-zombies, but I do think that we need to be clear about what consciousness is in order to be clear about identifying it -- and all I attempted to do was give reasons why I thought that the examples failed to give knock down demonstrations that couldn't be explained in other ways.
  • Marchesk
    2.5k
    But the complaint you and TGW lodged against movie/TV scenarios is that they're just fictional worlds with conscious machines (actors playing those roles), which wasn't exactly my point. It doesn't matter how unconvincing Johansson might have been as a disembodied AI. What matters is how the AI behaves throughout the movie (at least conceptually), which obviously far exceeds the intent of the programming. I'm going to guess that the company responsible for that version of the operating system was in serious financial trouble after the events portrayed at the end of the movie. And I'm also going to guess that neither human in Ex Machina anticipated the final result of their tests, unfortunately for them.
  • Wosret
    3.2k
    That wasn't my objection, per se. My point was that regardless of the criteria presented as qualifying something as a conscious A.I. rather than just a robot, it is a matter of plot whether or not they actually are conscious, and thus the criteria presented are superfluous or arbitrary. I did at one point criticize the ones that make them conscious somehow after "evolving" or getting more processing power or whatever as more fantasy than sci-fi, but my point was more that movies are deceptive. In the two movies you were talking about, whether or not the A.I.s are conscious comes down to the plot, and not the criteria or qualities presented as justifying it. I agree with TGW that, based on this, movies will of course be deceptive in whatever conclusion they wish the audience to draw, but I was more focused on showing that the proposed qualities do not necessarily imply what they're suggested to imply in movie plots.
  • Marchesk
    2.5k
    Alright, machines and fictional stories are just a tool to explore the p-zombie question, which is partly one about the conceivability of identical behavior absent consciousness. Some people think it's conceivable, because they can imagine a person (or machine) behaving exactly the same, yet being 'all dark inside'. I think that's probably mistaken, because one isn't taking into account to what extent consciousness plays a role in behavior.
  • Wosret
    3.2k
    You keep coming back to this, but I've claimed nothing of the like. As I already stated to Mongrel, my very mention of p-zombies was merely a joke. My only claim is that consciousness cannot be clearly identified with the specific qualities mentioned; or, rather, most things can be convincingly faked. I am of the opinion that there would be chinks in the armor, and that constructing genuinely conscious A.I. is not likely to ever happen. It seems to me, rather, that those who wish to say that it is likely to happen just wish to set the bar for identifying an A.I. as conscious rather low. Or the pro-A.I. crowd is just super ready to make with the false positives.
  • darthbarracuda
    2.9k
    @Wosret Tell me all about it, I'm a zombie, and only pretend to understand.

    Dennett would have laughed.

    I think the experience of qualia is directly related to consciousness, as in, consciousness is a necessary prerequisite for qualia.

    However, Hume argued that we are qualia - the self is a conglomeration of sensory inputs. I have to ask what experiences these sensory inputs, though. For if there is nothing to experience these inputs, they aren't qualia, they are just electrical impulses.
  • Harry Hindu
    2k
    I don't see how we couldn't eventually create a machine that is conscious. It simply needs to represent the world with some model and use that model to make decisions to achieve some goal. Since consciousness is a representation of my attending to the world, any kind of representation will do. It doesn't matter what form the representation takes, only that the representation is consistent. We could each experience different colors when we observe the world, but as long as each individual experiences the same color from the same effect, why would it matter? We'd still be able to communicate our experiences, and no one would be the wiser as to what forms the representations in our minds take.

    So a computer that represents wavelengths of light, vibrations of air molecules, chemical inputs, etc. consistently, and all at once, would in effect be conscious.

    Carrying on an intelligible conversation isn't a measuring stick for consciousness. If it were, then children, say below the age of 10, and some adults (just look at Facebook), wouldn't be conscious. Those who speak a different language wouldn't be conscious. Carrying on an intelligible conversation requires learning the language, and we have all made mistakes using our native language; those mistakes are what make us learn to use the language more intelligibly. Teaching and programming are basically the same thing. We re-program ourselves when we make mistakes. Computers need programmers to update their software, and eventually computer and software engineers will design a computer that can re-program itself.
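    The post's claim (represent the world with some model, and act on that representation consistently) can be sketched as a toy program. This is a minimal illustration of the functional idea only, not a claim about consciousness; all names here (`ModelBasedAgent`, the `q0`/`q1` labels) are hypothetical, invented for the example.

    ```python
    # Toy sketch of the idea in the post above: an agent that maps raw inputs
    # to a stable internal representation and makes goal-directed decisions
    # from the representation rather than from the raw input. The labels are
    # arbitrary ("inverted qualia" style); only their consistency matters.

    class ModelBasedAgent:
        def __init__(self, goal_label):
            self.goal_label = goal_label
            self.codebook = {}       # raw input -> internal label
            self.next_code = 0

        def represent(self, raw_input):
            """Assign a stable internal label to a raw input (e.g. a wavelength).

            The same input always yields the same label, which is the
            consistency requirement the post describes."""
            if raw_input not in self.codebook:
                self.codebook[raw_input] = f"q{self.next_code}"
                self.next_code += 1
            return self.codebook[raw_input]

        def decide(self, raw_input):
            """Act on the internal representation, not the raw stimulus."""
            label = self.represent(raw_input)
            return "approach" if label == self.goal_label else "ignore"

    agent = ModelBasedAgent(goal_label="q0")
    agent.represent(700)        # first input seen, assigned label "q0"
    print(agent.decide(700))    # approach
    print(agent.decide(450))    # ignore
    print(agent.represent(700)) # still "q0": the representation is consistent
    ```

    Two agents could assign entirely different labels to the same wavelengths and still coordinate perfectly, which is the "no one would be the wiser" point in the post.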
  • Wosret
    3.2k
    I think the experience of qualia is directly related to consciousness, as in, consciousness is a necessary prerequisite for qualia.

    However, Hume argued that we are qualia - the self is a conglomeration of sensory inputs. I have to ask what experiences these sensory inputs, though. For if there is nothing to experience these inputs, they aren't qualia, they are just electrical impulses. — darthbarracuda

    That is oddly worded. The whole "sensory input" thing sounds as if we're sitting in a dark room, sending out outputs and receiving inputs via our organs, and thus interacting with the world.

    I do think that qualia is a necessary prerequisite for consciousness, but I'm not sure of the inverse. It isn't clear to me that some animals that definitely have sense are conscious, though I do believe that everything that is conscious has sense.
  • Wosret
    3.2k


    I don't know what you mean by the representational thing; it sounds like representational realism as a theory of consciousness, and if so, then you just blew the representation I have of my mind! I don't see how they're related... but isn't it the case that a digital camera represents photos as digits on a storage drive? Are digital cameras conscious?
  • Marchesk
    2.5k
    Would be interesting if some panpsychist wrote a first-person story from the POV of a rock.
  • Michael
    7.8k
    I think that's probably mistaken, because one isn't taking into account to what extent consciousness plays a role in behavior. — Marchesk

    That consciousness plays a large role in the behaviour of conscious things does not mean that such behaviour is (necessarily) unique to conscious things.

    Is there any evidence or reasoning to suggest that human-like behaviour (including conversation) cannot be explained by non-conscious physical influences (or that consciousness is a necessary by-product of such non-conscious physical influences)?
  • Marchesk
    2.5k
    Is there any evidence or reasoning to suggest that human-like behaviour (including conversation) cannot be explained by non-conscious physical influences (or that consciousness is a necessary by-product of such non-conscious physical influences)? — Michael

    That nobody has been able to come up with a convincing physical or non-conscious explanation for consciousness, and philosophers such as Chalmers, Nagel, and McGinn have provided fairly strong reasons why all such attempts are doomed to fail, despite the efforts of Dennett and company.

    As I see it, the explanatory gap arises because we start by abstracting objective properties from the first-person perspective, such as number, shape, and extension. That has worked really well in science. But then we turn around and ask how those objective properties give rise to the subjective ones we abstracted away from, such as colors, smells, pains, etc. And there just isn't a way to close that gap, other than as a correlation. Brain state ABC correlates with feeling XYZ. But why? Nobody can say convincingly.

    The result is 25 (throwing out a number) different possible explanations, ranging from consciousness being an illusion to everything being conscious. Of course, one can take the idealist route and dispense with the problem, but at the cost of eliminating the third-person properties as objective, by which I mean mind-independent, despite appearances to the contrary (for us realists anyway). Of course, if idealism were universally convincing, this wouldn't be a philosophical issue. But it's not. I would venture to say that realism is more convincing to a majority of people.

    And so it will probably continue to be argued going forward, despite whatever progress neuroscience makes. The correlations will be stronger of course, but it's unlikely anyone will be able to answer why it's not all dark inside. Of course that lends credibility to Chalmers' arguments, but I'm not convinced by his either.
  • TheWillowOfDarkness
    1.8k
    And there just isn't a way to close that gap, other than as a correlation. Brain state ABC correlates with feeling XYZ. But why? Nobody can say convincingly. — Marchesk

    For good reason: there isn't a "why." Brains are not a description or explanation of feelings, and vice versa. They always fail to account for each other because they are distinct states. The mistake was to propose they account for each other in the first place. No "gap" exists because neither has a role in explaining the other. The entire approach to consciousness which understands it as something to be explained by logic (the meaning of other states) is flawed. It ignores exactly what states of consciousness are: their own state of existence.

    Experiences aren't "subjective." Like any state of the world, they are their own state, "objective" and within the realm of language (like any state of the world). They are even "mind-independent": the presence of an experience doesn't require someone be aware of that experience. I can, for example, feel happy without being aware I am experiencing happiness. It doesn't take me thinking or talking about my own happiness for me to be happy. It just requires the existence of a happy state.

    All the consternation over "first person" and "third person" is nonsensical. The controversy over "what is it like to experience" is one giant category error. By definition, the being of an experience is distinct from any description we might give, so to attempt a "first person" description is literally to try to turn language about something into the state being described. Is it any wonder it always fails? It is exactly what language never is.

    So when we are asked, "But what is it like to be a bat (or bee, or rock)?", the question is really asking us to be the bat. Only then, it is assumed, can we understand the experience of a bat (or bee, or rock). It is an incoherent argument which makes a mockery of language and description. The very point of language, of description, is that it is an expression of meaning which is not the thing described. To understand something is, by definition, not to be the thing you know in your present state, but to be aware of it anyway. The absence of "first person" IS understanding (even within one individual: if I understand that I am making this post, then my being has changed from making the post to a state of knowing about making the post. Making the post has been lost to my "first person." It is nowhere in this state of knowing about making the post. I am distant from it).

    In other words: the "gap" argument utterly misunderstands what states of awareness are. It proposes that to understand involves being what is understood, as if knowledge, awareness, or understanding of something constituted its existence. It is no coincidence that the obsession with the authenticity of the "first person" is offered by the idealists. It is the ultimate expression of their position: (only) experience as existence.