• Zelebg
    626
    1. Camera A: visual input extern -> feeds into 2.
    2. Program A: subconsciousness & memory -> feeds into 3.
    3. Display A: visual output inner -> feeds into 4.
    4. Camera B: visual input inner -> feeds into 5.
    5. Program B: consciousness & free will -> feeds into 6.& 2.
    6. Speaker: audio output extern
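
For concreteness, the data flow in the list above might be sketched as a toy Python loop. All function bodies and names here are placeholder stand-ins of my own, not part of the proposal; only the wiring between the six components follows the list:

```python
def camera_a():                    # 1. Camera A: external visual input
    return "external scene"

def program_a(extern, feedback):   # 2. Program A: subconsciousness & memory
    return f"processed({extern}, {feedback})"

def display_a(inner_image):        # 3. Display A: inner visual output (the inner screen)
    return inner_image

def camera_b(screen):              # 4. Camera B: inner visual input (watches the screen)
    return screen

def program_b(inner_view):         # 5. Program B: consciousness & free will
    # Produces both external speech (to 6.) and feedback (back to 2.)
    return f"say({inner_view})", f"feedback({inner_view})"

def speaker(utterance):            # 6. Speaker: external audio output
    print(utterance)

feedback = "initial state"
for _ in range(3):                 # run a few passes around the loop
    extern = camera_a()
    inner = program_a(extern, feedback)
    screen = display_a(inner)
    view = camera_b(screen)
    utterance, feedback = program_b(view)
    speaker(utterance)
```

The only structural point the sketch captures is the double loop: Program B both acts externally and feeds back into Program A, which is what the "-> feeds into 6. & 2." arrow says.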

This "being" is quite limited in the ways it can sense and act upon the external world. However, I claim that in principle it still has all the hardware sufficient to actualize consciousness greater than that of humans, not in the qualia sense of the experience, but considering everything else, including free will.

I suppose it might be questioned what exactly "Program A" should be doing and what part of that should go onto the inner screen, but I expect most objections to land around "Program B", that it is not what can be called consciousness and that it can not exercise free will. I am interested to hear those arguments. What concepts will you lean on, just how exactly do you disagree?
  • Tim3003
    347
I think that the qualia sense of experience is crucial. Without it there can be no consciousness nor free will. You need some externally derived driver for pain and pleasure. Without their stimuli the concept of 'will' is impossible to actualise. So-called free will has to choose between criteria for a decision. Ultimately, the way it decides is by weighing the pain/pleasure the different choices will entail. You could simulate pain and pleasure by allotting scores to visual cues, but I think you would then produce a very odd and limited form of intelligence.
Do you include microphones to catch sound? How does your AI learn language? Without it, how can it conceptualise and express complex ideas?
Actually this subject interests me too, and I have thought about the practicalities of a computer-based consciousness. It seemed to me you'd have to synthesise all the attributes of a human. There is no short-cut. Still, I don't want to sound dismissive... :smile:
  • fdrake
    6.6k
    2. Program A: subconsciousness & memory -> feeds into 3.Zelebg
    5. Program B: consciousness & free will -> feeds into 6.& 2.Zelebg

    What concepts will you lean on, just how exactly do you disagree?Zelebg

    For the purposes of the intended discussion do you care about how A and B work?
  • philsterr
    6
    "Program B", that it is not what can be called consciousness and that it can not exercise free will. I am interested to hear those arguments. What concepts will you lean on, just how exactly do you disagree?Zelebg

If you create a program that has 'consciousness' and 'free will', then it can be called consciousness and can exercise free will.

    How would you achieve that though? If your program is run on a common computer, it will boil down to a deterministic set of instructions. Its audio output will entirely be determined by its initial code and visual input history. The person who codes and interacts with it can have 100% control over its output.
    Could you call that free will?
  • Zelebg
    626

    I think that the qualia sense of experience is crucial. Without it there can be no consciousness nor free will.

I would argue a program can be made to incorporate 'qualia' properties in a sufficiently robust way that enables those concepts to interact with other concepts in the thought function of the consciousness program.

Perhaps the computer's inner representation of qualia would be in terms of pie-charts or whatever, but does it really matter if in the end it can still draw all the same conclusions and express them with the same kind of semantics as humans do?


    You need some externally derived driver for pain and pleasure. Without their stimuli the concept of 'will' is impossible to actualise.

I can imagine all my sensory inputs stopping working so that all I can do is speak. But I would still be able to say I'd rather have a cup of milk than a punch in the face.


    So-called free will has to choose between criteria for a decision. Ultimately, the way it decides is by weighing the pain/pleasure the different choices will entail.

Is it not sufficient to have goals? If offered choices that are not relevant to those goals, it can then choose at random, which is the only absolutely free kind of choice. Right?


    How does your AI learn language?

Oh, I see now where you are coming from. I am talking about this in more abstract terms: what can and can not be done in principle. So in this example the computer has already learned or has been programmed, and I don't want to go into those details unless there is an argument that it can not be done in principle.
  • Zelebg
    626

    For the purposes of the intended discussion do you care about how A and B work?

    Only if there is an argument such programs could not be made in principle. And I should probably mention both programs are able to modify and expand themselves & each other.
  • Zelebg
    626

    How would you achieve that though? If your program is run on a common computer, it will boil down to a deterministic set of instructions.

This is the question of free will, but more generally it's the question of top-down, or downward, causality. Let me expand on this a bit so we have a wider range to pick examples from.

Layers of existence: atom - molecule - cell - organ - organism - consciousness - ecosystem - planet - solar system... and there are two important and mysterious boundaries. First, where molecules become 'alive' as a collective in a cell, and second, where organs become 'conscious' as a collective in an organism. But our question stands before any of the layers, and the question is whether these collective entities from higher levels could be something more than just the sum of their parts: is there a point where what actually happens is no longer determined by the dynamics of the lower-level elements, but instead by new emergent properties of the higher level?


    Its audio output will entirely be determined by its initial code and visual input history. The person who codes and interacts with it can have 100% control over its output. Could you call that free will?

It all depends on the definition of 'free will'. So I can't answer your question before we settle the definition and the rest of the semantics. However, I can claim it is as free as human intention can be, which means determined by such things as personality and goals. Would you agree with this?
  • DingoJones
    2.8k


How do you know that what you have created is consciousness?
  • Zelebg
    626

A proper definition will be the judge. Agreed?

    And by proper I mean the one most of us agree on. But in any case, all arguments put forward here should basically be about some definition or another because we are talking in general terms, limits and possibilities, rather than anything specific. Although examples and comparisons can go into particular details in any of those emergent levels of existence I mentioned.

Since my job here is to show that this computer "being" indeed satisfies all the necessary definitions for my claim to be true, maybe you would like to present the definition of 'consciousness' so we can start?
  • DingoJones
    2.8k


However you want to define consciousness, I'm asking how you would know. The reason I'm asking is that it would be very difficult to do, considering how very little we actually know about consciousness. How do you know you will have replicated it in this computer when you would have no way of accounting for missing aspects/bases (because you do not even know what they are)?
  • Banno
    25.1k
    5. Program B: consciousness & free will -> feeds into 6.& 2.Zelebg

Doesn't anyone else feel uncomfortable with consciousness already being in this explanation of consciousness?

    How is this not a vicious circularity?
  • Zelebg
    626

It's just a description of the functions. It can still be questioned whether an ordinary computer can host such a program, or whether there is something fundamental about those functions that symbols can not capture.
  • Banno
    25.1k
    It's circular. You make consciousness using consciousness.
  • Zelebg
    626

However you want to define consciousness, I'm asking how you would know. The reason I'm asking is that it would be very difficult to do, considering how very little we actually know about consciousness. How do you know you will have replicated it in this computer when you would have no way of accounting for missing aspects/bases (because you do not even know what they are)?

You can't just say Newton's laws are not a complete description of celestial motion for no reason at all; you have to point at something, even vaguely, like there being something wrong with Mercury's orbit.

You are basing your opinion on some definition of 'consciousness' where there is something unknown about it. What is it? This computer talks, sings, writes poems, knows all the internet and can answer any of your questions in any language, at least at the level of the Wikipedia standard. It can tell you what it wants, about its personality, its habits, likes, dislikes, wishes, dreams... we can even watch what it dreams. Damn, if that's not more conscious than me, how much more conscious can it even be?
  • Zelebg
    626

To rephrase: those are labels, not explanations. You may question whether the label is appropriate or not.
  • Banno
    25.1k
    Then you've lost me.
  • Banno
    25.1k
    I think that the qualia sense of experience is crucial. Without it there can be no consciousness nor free will.Tim3003

    Yet how could you know that the system had such an experience?

    And if you cannot know such a thing, why doesn't that render the project infertile?
  • Zelebg
    626

It's like you read that numbered list as arguments, but that's just a list of hardware components. I'm just saying a PC can be programmed to be conscious, self-aware and free-willed. I'm not trying to explain anything until someone points to something that needs explaining.
  • Banno
    25.1k
    Ah. I was misled by the title.

    As you were.
  • Banno
    25.1k
More generally, that's why qualia are pretty darn near useless.
  • Banno
    25.1k
I would argue a program can be made to incorporate 'qualia' properties in a sufficiently robust way that enables those concepts to interact with other concepts in the thought function of the consciousness program.Zelebg

Again, how would you ever know that a program incorporated qualia?
  • Zelebg
    626

    Yet how could you know that the system had such an experience?

We can code it into the program and so we can be certain it has it. Can we not? It also has that inner screen, so I can say there it is, qualia right there that we can even see, unlike my qualia, which you can not.
  • Zelebg
    626

    Ah. I was misled by the title.
With the title I wanted to suggest that this particular arrangement of hardware components is important to achieve all that.
  • Banno
    25.1k
    We can code it into the program and so we can be certain it has it.Zelebg

    Ah, I see, so you think that you can create consciousness by creating a screen for your homunculus!

    The errors are compounding here, I think. But I will re-read what you have written, with this new version in mind.
  • Banno
    25.1k
    So now I have an image of your homunculus being conscious because inside it is another homunculus, which is conscious because inside it...

    Is that what you have in mind?
  • Zelebg
    626

Again, how would you ever know that a program incorporated qualia?

Maybe you mean it is fake if the computer's inner representation of qualia is, say, some list of pie-charts? I'd say no, because your electro-chemical representation of qualia can hardly be expected to make any more sense. And actual meaning may not be embedded directly in the lower-level representation of the concept definition, but calculated relatively, in connection with all other stored concepts.

This "relative" meaning may then be the same kind of 'feeling' about the same qualia as that of a human, even though extracted from different hardware using a different symbolization. But I don't think any of it matters if the machine can draw from those concepts, however internally represented, exactly the same conclusions as we do.
  • Banno
    25.1k


    See if I can register my objection by giving a simplified version of your machine; one that is intended to understand an utterance by translating it.


    1. Microphone A -> feeds into 2.
    2. Translator A: feeds into 3.
    3. Speaker A: -> feeds into 4.
    4. Microphone B: -> feeds into 5.
    5. Translator B: -> feeds into 6.& 2.
    6. Speaker: audio output extern

    So a sentence in English is translated into some other language and then back into English, but included is the reflexive link from 5 to 2.
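
To make the structure concrete, the loop might be sketched as a trivial program. The word list is of course my own stand-in; the actual translation mechanism is beside the point, which is about the structure of the loop:

```python
# Toy version of the translate-and-back loop: English -> other language -> English.
EN_TO_X = {"hello": "bonjour", "world": "monde"}
X_TO_EN = {v: k for k, v in EN_TO_X.items()}

def translator_a(english):   # 2. Translator A: English -> other language
    return [EN_TO_X.get(w, w) for w in english.split()]

def translator_b(other):     # 5. Translator B: other language -> English
    return " ".join(X_TO_EN.get(w, w) for w in other)

sentence = "hello world"
for _ in range(2):           # the reflexive link from 5 back to 2
    sentence = translator_b(translator_a(sentence))
print(sentence)  # prints "hello world" every time; nothing in the loop understands it
```

Adding the reflexive link just sends the same sentence around again; no amount of looping turns lookup into understanding.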

    Would anyone suppose that this device understood the sentences it translated?

    I think the situation with your device is the same.
  • Zelebg
    626

    Is that what you have in mind?

I think it can be argued without it, but yes, a little driver's seat for consciousness, complete with a joystick and a tiny little monitor so it can play itself as it likes. Isn't that exactly how it feels? It's an interesting parallel in any case, and I do not see where the analogy breaks.
  • Zelebg
    626

    Would anyone suppose that this device understood the sentences it translated?

    Great question. And again to answer it we must first talk about some definition, in this case what "to understand" means. Would you care to define it?
  • petrichor
    322


    Computers just execute instructions that are really themselves just high-level calls for packaged low-level logic gate operations on bits, the bits themselves only being meaningful to the human observers who assign meaning to them.

    You have to explicitly tell the computer what to do at each step. No hand-waving allowed. Suppose you want the computer to feel pain. You can't just write a program that says the following:

    if condition X is true, feel pain

How would you go about writing the actual instructions for feeling pain? What are the step-by-step instructions which, if followed by a machine incapable of feeling, will cause it to feel, and to feel pain?

    Let's have it. How to suffer: Step 1...
  • Banno
    25.1k
    And again to answer it we must first talk about some definition, in this case what "to understand" means.Zelebg

    I don't see why.

Welcome to The Philosophy Forum!
