• Terrapin Station
    13.8k
    I would be surprised if you didn't already know about Chalmers's Hard Problem of Consciousness and the various arguments involved:Amity

    Yes, of course. The first step in tackling "the hard problem" is setting out our criteria for explanation so that (a) the things we consider explained fit the criteria, (b) the things we consider unexplained fail them, and (c) the criteria are framed so that anyone reasonably educated and competent, or perhaps even a well-programmed computer, could check whether a putative explanation counts as legitimate under them. That way we can't just declare things explained or unexplained willy-nilly.
  • hypericin
    1.6k
    I think the puzzle illustrates the breakdown of the concept of self as transcendent and persistent, absent a soul.

    If you admit to souls, the problem is merely theological: does the soul find the new, teleported body, or doesn't it?

    Without souls, it all seems to become a matter of opinion. Either you call the teleportee the same as the teleported, or you don't; it is just an intellectual preference.

    But this does not fit with the ontological stakes of the problem. My personal persistence seems to be an ontological question. If it is not ontological, if it is merely a matter of opinion, then this ontological sense of self must be an illusion.
  • TogetherTurtle
    353
    'Tries to' and 'in one aspect' being the operative words here. Human consciousness is more than neurons firing.Amity

    How do you know that consciousness is more than neurons firing? I would think that we should start small and teach machines more complex subjects as time goes on and our understanding grows.

    The thing is: there would be no awareness and no sense of being bold. No sense of accomplishment.
    That is the difference in the type of consciousness, and yes, we would not necessarily wish to burden a computer with what it means to be human.
    Amity

    Why would there not be a sense of boldness? Wouldn't a sense of boldness make the machine work better under harsher circumstances? If we can make machines learn the differences between images and generate their own, why can't we make them learn the differences between emotions and develop their own?

    How are we aware?
    Well, that is the question of consciousness addressed by various disciplines.
    Amity

    I think that most people agree that we don't actually know. We only have ideas. If or when we discover the answer to this question, I think that as long as nothing supernatural is involved, we can harness the power of such things and recreate them in simulations.

    Well, we can hope that we are doing the best we can but we don't know that we are.
    To hope is to be human.
    Amity

    We truly must always be on the move, always looking for better ways even when all signs suggest there is nothing more to find. I don't think that will ever change. Even if we discover everything in the universe and how it works, we must still look for new things, because I don't think there will ever be a sign saying that we have found everything. The machines we create to help us in this will probably need to feel hope. The machines we make to do other things might not.

    Do you know of the Turing test? Essentially, you are put in a room with two monitors. You type in a question, and that question goes to recipients in two separate rooms. In one room is a person, whose answer appears on one monitor, and in the other room is a self-learning AI, whose answer appears on the other monitor. If you can tell the difference between the two, the computer loses. If you can't, the machine is indistinguishable from a human. Of course, you would need to ask many questions, but if the person asking them can't tell the difference at the end of the day, then how do you justify saying the machine is not both aware and human?
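
    Just to make that setup concrete, here is a rough sketch in Python of one round of such a game. Everything in it, the canned answers and prompts included, is invented for illustration; it is not any standard implementation:

    ```python
    import random

    def ai_respond(question: str) -> str:
        # Placeholder machine answers; a genuine candidate AI would go here.
        canned = ["I suppose it depends.", "Why do you ask?", "I'm not sure yet."]
        return random.choice(canned)

    def human_respond(question: str) -> str:
        # Stands in for the person in the other room typing an answer.
        return input(f"(human, please answer) {question}\n> ")

    def imitation_game(questions) -> bool:
        """One session: the judge sees answers on monitors A and B, with the
        machine behind one of them at random, and must guess which."""
        machine_is_a = random.random() < 0.5
        for q in questions:
            ai_ans, human_ans = ai_respond(q), human_respond(q)
            a, b = (ai_ans, human_ans) if machine_is_a else (human_ans, ai_ans)
            print(f"Q: {q}\n  monitor A: {a}\n  monitor B: {b}")
        guess = input("Which monitor is the machine, A or B? ").strip().upper()
        # True means the judge spotted the machine, i.e. the computer loses.
        return (guess == "A") == machine_is_a

    if __name__ == "__main__":
        caught = imitation_game(["What do you hope for?", "Describe a bold act."])
        print("machine identified" if caught else "machine passed this round")
    ```

    The interesting part, of course, is what would actually go inside ai_respond; the sketch only shows the protocol of blind questioning and guessing.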
  • Amity
    5.3k
    The machines we create to help us in this will probably need to feel hope. The machines we make to do other things might not.

    Do you know of the Turing test? Essentially, you are put in a room with two monitors. You type in a question, and that question goes to recipients in two separate rooms. In one room is a person, whose answer appears on one monitor, and in the other room is a self-learning AI, whose answer appears on the other monitor. If you can tell the difference between the two, the computer loses. If you can't, the machine is indistinguishable from a human. Of course, you would need to ask many questions, but if the person asking them can't tell the difference at the end of the day, then how do you justify saying the machine is not both aware and human?
    TogetherTurtle

    Even if it were possible, why would we need to instil hope, or emotion, into a computer? If some computers are given varying degrees of humanity, the situation would quickly evolve into a state of competition, each tribe having its own territory or interests. Robot wars.

    A form of the Turing test happens every day. Think of computer-generated spam, chatbots, and the need for us to prove we are human as part of online security.

    I think the larger question in all of this is: What does it mean to act human?

    More and more we are using computer assessment tools to make decisions, the results of which can seriously affect someone's life. Quantitative checkboxes, while helpful, can only go so far. The person administering them should also have life experience and qualities of empathy, compassion, and common sense.

    I don't have to justify anything about a computer and how human it might be, no matter how well it manages to deceive a questioner. We can be deceived by chatbots. That still doesn't make them human.

    OK, enough input/output for me, I think. I'll leave you with...

    “In the beginning the Universe was created.
    This has made a lot of people very angry and been widely regarded as a bad move.”
    ― Douglas Adams, The Restaurant at the End of the Universe
  • TogetherTurtle
    353
    Even if it were possible, why would we need to instil hope, or emotion, into a computer? If some computers are given varying degrees of humanity, the situation would quickly evolve into a state of competition, each tribe having its own territory or interests. Robot wars.Amity

    It would be useful in life-or-death situations. If a robot has a choice between saving a human from a fire and saving itself, empathy would be useful. Sure, the computer is more useful alive, but the machine's job might be to save people from fires. If a computer is trying to teach children how to spell, then patience would be required. If a computer is only concerned with productivity (which they often are), then children with learning disabilities would be sent out of the classroom.

    Robot wars are interesting, but I ask you to consider our current situation. We have wars between humans all the time, but we can also make peace. I think that it is in our best interest to make peace, whether the possible aggressors are biological or synthetic.

    A form of the Turing test happens every day. Think of computer-generated spam, chatbots, and the need for us to prove we are human as part of online security.Amity

    But do you actually think that Suzy is real when she tells you to go to her totallylegitnudephotots.com account? I would hope you don't, so Suzy doesn't pass the Turing test.

    I think the larger question in all of this is: What does it mean to act human?Amity

    That is the big one.

    More and more we are using computer assessment tools to make decisions, the results of which can seriously affect someone's life. Quantitative checkboxes, while helpful, can only go so far. The person administering them should also have life experience and qualities of empathy, compassion, and common sense.Amity

    Wouldn't automated systems be better if they could more accurately mimic people with life experience, empathy, and compassion?

    I don't have to justify anything about a computer and how human it might be, no matter how well it manages to deceive a questioner. We can be deceived by chatbots. That still doesn't make them human.Amity

    As I said above, you really shouldn't be deceived by modern chatbots. Well, at least you aren't deceived by the ones you know about. I could be a bot, you know. That would explain why I'm so adamant about my own consciousness.

    OK, enough input/output for me, I think. I'll leave you with...

    “In the beginning the Universe was created.
    This has made a lot of people very angry and been widely regarded as a bad move.”
    ― Douglas Adams, The Restaurant at the End of the Universe
    Amity

    I understand the wish to limit a discussion. Sometimes I wish I could just walk away from things, so I'm more than willing to display the empathy required for that. I hope you wish to pick up this line of thought in the future, though; it seems useful to me.