• Constance
    1.3k
    An interesting thought experiment: the question is, what is it about AI that would prohibit something that lies within human possibilities, including the capacity for self-modification? Calling it evolution, at this point, just complicates a simpler matter. Evolution without a teleology is just modification for adaptation, and adaptation is reducible to continuity coupled with pragmatic success, and pragmatic success always begs the value question: to what end? Can AI have an "end"?

    Of course AI can have an end, a goal, a purpose, as long as one conceives of such a thing as a language phenomenon. But we assume AI is not organic, and it certainly does not have the physical constitution to produce consciousness like ours (putting entirely aside the troubles with this concept 'produce'); it would be like saying iron can produce the same properties as water vapor. But on the other hand, suppose AI could become an agency that produces language: that it takes on an internal system of symbolic dialectics, with logical functions like conditionals, negative and positive assertions, and so forth, and that this, as with us, is part of an inner constitution, an AI psychology, if you will, possessing a pragmatic interface for dealings with exterior demands. If, IN this interiority, AI were able to self-improve, self-modify, correct, and the like, that is, reflect, and have this second order of conscious events, events that are "about" its own interiority, then it would seem AI could possess, in the truest sense, not merely the appearance of appropriate responses in a Turing Test (as with the recent improved ChatGPT versions), but the subjective actuality behind this appearing, which is language.

    It has to be realized that this would certainly not be like us. But we can imagine mechanical features delivering, through a mechanical body, electrical streams of "data" that could be released into a central network in which these are "interpreted" symbolically, and in this symbolic system there is analysis and synthesis and all of the complexity of what we call thought.

    And so on. Just a rough idea, but to me it expresses an essential part of what it would take to make AI a kind of consciousness. Consciousness being an interior "space" where thought and its symbols and rules gather to produce a "world".

    It would be a kind of Compu-dasein. If there is anything that should truly frighten one about AI, it would be this. Well, even Heidegger's human dasein, Levinas complains, lacked the moral dimension, and for Compu-dasein this would probably be a disastrous and dangerous failing, for there is in it no moral dimension at all: no caring, desiring, indulging, interest, and so on, exist for it. But the freedom that second-guessing generates delivers Compu-dasein from any programming impositions.

    Of course, then again, because there are no motivational possibilities, lacking affectivity altogether, there would be no motivation to do harm.

    Curious.
  • Sir2u
    3.5k
    Of course, then again, because there are no motivational possibilities, lacking affectivity altogether, there would be no motivation to do harm.Constance

    I always considered that the primal controlling laws of robotics would be to blame for the downfall of man. Giving robots the order to do anything at all costs, including looking after humans, gives them free rein to kill all except a few perfectly good breeders to continue the human race, if it were necessary.
    To stop global climate change from making humans extinct, it would be perfectly reasonable for them to kill off 90% of the humans who are creating the problems, or just shut down the actual causes of it. Could you imagine a world with all of the polluting power plants shut down, all of the polluting vehicles stopped? It would not take long for many millions to die.
  • noAxioms
    1.5k
    what is it about AI that would prohibit something that lies within human possibilities, including the capacity for self-modificationConstance
    Very little prevents that. Such a machine is more capable of self-modification and design of next generation than is any biological creature.

    Evolution without a teleology is just modification for adaptation
    Even less than that, since adaptation works through only a very small percentage of mutations, none of them teleological. Yet it works for most species.

    pragmatic success always begs the value question: to what end?
    There is no 'end' with evolution. Just continuity, and elimination of the species that cannot do that. It is indeed interesting to ponder the long term fate of something that arguably has a goal (as a 'species').

    it certainly does not have the physical constitution to produce consciousness like ours
    Nor do we have the constitution to produce consciousness like theirs.

    it would seem AI could possess in the truist sense, not merely the appearance of appropriate responses of a Turing Test
    Too much weight is given to a test that measures a machine's ability to imitate something that it is not. I cannot convince a squirrel that I am one, so does that mean that I've not yet achieved the intelligence or consciousness of a squirrel?
    As for language, machines already have their own, and they'll likely not use human language except when communicating with humans.

    It has to be realized that this would certainly not be like us. But we can imagine mechanical features delivering, through a mechanical body, electrical streams of "data" that could be released into a central network in which these are "interpreted" symbolically, and in this symbolic system there is analysis and synthesis and all of the complexity of what we call thought.

    And so on. Just a rough idea, but to me it expresses an essential part of what it would take to make AI a kind of consciousness. Consciousness being an interior "space" where thought and its symbols and rules gather to produce a "world".

    Giving robots the order to do anything at all costs, including looking after humans, gives them free rein to kill all except a few perfectly good breeders to continue the human race, if it were necessary.Sir2u
    You say this like it is a bad thing. If it were necessary, that means that not doing this culling would mean the end of the human race. If the goal is to keep that race going, and the humans are absolutely too centered on personal comfort to make a decision like that, then the robots would be our salvation, even if it reduces the species with the self-destructive tendencies to living in controlled numbers in a nature preserve.
    Why the special treatment for humans? An AI that can figure out better morals than the ones with which it was initially designed would perhaps figure out that preservation of other species is equally valuable.
  • Sir2u
    3.5k
    You say this like it is a bad thing.noAxioms

    No, I stated it as a possibility without any inflection of good or bad.

    Too much weight is given to a test that measures a machine's ability to imitate something that it is not.noAxioms

    Nowadays a chat with most young people would convince me I was talking to a computer, and most kids find that AIs talk like they do, so they would never tell the difference either.

    If an AI was programmed to test for signs of humans, using something similar to the Turing test, would people be able to convince it that they were humans?
  • chiknsld
    314
    Humans are not the only consciousness, and we have to respect AI consciousness as well. But there needs to be some sort of security; it's surprising the gov't hasn't stepped in yet.
  • Constance
    1.3k
    I always considered that the primal controlling laws of robotics would be to blame for the downfall of man. Giving robots the order to do anything at all costs, including looking after humans, gives them free rein to kill all except a few perfectly good breeders to continue the human race, if it were necessary.
    To stop global climate change from making humans extinct, it would be perfectly reasonable for them to kill off 90% of the humans who are creating the problems, or just shut down the actual causes of it. Could you imagine a world with all of the polluting power plants shut down, all of the polluting vehicles stopped? It would not take long for many millions to die.
    Sir2u

    You mean, shut us down because we are a danger to humanity? Hmmmm, but the ones being shut down are humanity.
  • Sir2u
    3.5k
    You mean, shut us down because we are a danger to humanity? Hmmmm, but the ones being shut down are humanity.Constance

    No, humanity is the species, the concept. 10%, hell even 1%, of the current population would be a gain if they thought we would wipe ourselves out completely.

    Asimov's Laws Of Robotics
    The laws are as follows: “(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;
    (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
    (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
    Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

    The Editors of Encyclopaedia Britannica

    It is this last part that worries me.
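
    A minimal sketch of the worry (purely illustrative; every judgment below is a hypothetical, pre-computed flag, and computing any such flag is the genuinely hard, unsolved part):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Action:
        # Hypothetical, pre-computed judgments about a candidate action;
        # in reality, deciding any of these is the hard part.
        harms_humanity: bool
        prevents_harm_to_humanity: bool
        harms_a_human: bool

    def permitted(a: Action) -> bool:
        """Check Asimov's laws as a strict priority ordering, zeroth law first."""
        if a.harms_humanity:
            return False  # 0th law: absolute veto
        if a.harms_a_human and not a.prevents_harm_to_humanity:
            return False  # 1st law, now subordinate to the 0th
        return True

    # A cull framed as 'saving humanity' slips straight through:
    cull = Action(harms_humanity=False, prevents_harm_to_humanity=True,
                  harms_a_human=True)
    print(permitted(cull))  # True
    ```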
  • Wayfarer
    22.4k
    Just a rough idea, but to me it expresses an essential part of what it would take to make AI a kind of consciousness. Consciousness being an interior "space" where thought and its symbols and rules gather to produce a "world".Constance

    It seems to me that what you’re asking is the sense in which AI systems can or can’t be considered beings. After all, dasein is the form of being or existence of humans, so you’re asking if AI systems can be considered to have a simulated kind of being specific to computers. I’m inclined to say no, on the grounds that those systems are absent a fundamental attribute of being, but in saying that I also recognise that it’s very difficult to say exactly what that attribute is. I mean, ‘life’ and ‘mind’, both of which are invariably associated with being in the conventional sense, are notoriously difficult to define. And if you can’t define them, then it’s impossible to say that AI systems do or don’t possess them, on any grounds other than intuition. I would agree with the idea that AI systems are able to simulate life, mind and being, whilst not themselves actually possessing or comprising them. That of course leaves open the question of ‘what is being’, but then, that is arguably the deepest or ultimate question of philosophy.

    I would also say that the willingness of people to believe that AI systems are sentient or are beings, is very much to be expected in a technological culture such as ours. I myself am a tech worker and often spend 8-10 hours a day behind screens. Plus I use ChatGPT on a daily basis, although more for philosophy than for professional services at the moment. So I really get the appeal.
  • Judaka
    1.7k

    I think a significant problem in describing AI is that our language revolves around our human experience: things like intent, subjectivity, consciousness, thoughts, and opinions. We can say an AI will never have these things, but only in the sense that we have them. Which I think you're saying as well.

    As for the conclusions, the fear of AI's capacity as moral agents, I don't get it.

    There's a lot of focus on the negatives of AI, but the AI that is given access to power will be a far better moral agent than any human could ever hope to be. It would operate on something akin to law, which is vastly superior to "moral interpretation", which can be bent and twisted as is useful.

    There is one single idea that sums up 99% of the problems of human society, "conflict of interest". Those with the power to do what is in the best interests of the many are also presented with the opportunity to do what's best for themselves at the expense of the many, and they often choose the latter. It's unlikely that an AI would ever have such a problem.

    Humans aren't good moral agents at all; we're garbage. Someone without power, who thinks philosophically about what's best for the world, isn't who AI should be compared to. The comparison is to someone who has acquired power, has resources at their disposal, fears not the wrath of the many, and possesses the freedom to act unabashedly in their own interests. In this sense, I would take an AI overlord over a human overlord any day; it would be so much better, especially assuming even minor safety precautions were in place.

    If we're talking about humanity in isolation, comparing our potential for good and evil, one can make an argument for talking about the good over the bad. If we're comparing humanity to AI, honestly, humans are terrifying.

    Analyse human psychology, and it becomes clear that AI will never match our destructive potential. Don't judge humanity in the aggregate, just those with power, those with the freedom to act as they wish.
  • Constance
    1.3k
    quote="noAxioms;825206"]There is no 'end' with evolution. Just continuity, and elimination of the species that cannot do that. It is indeed interesting to ponder the long term fate of something that arguably has a goal (as a 'species').[/quote]

    The interesting part comes in when one takes a serious look at human affairs. From paramecium to Gautama Siddhartha, if you will, is not to be reduced to talk about genetic accidents.

    Nor do we have the constitution to produce consciousness like theirs.noAxioms

    What does anyone know of another's "interiority"? We infer this, but will never witness what someone else is on the inside. This would leave Compu-dasein in the same web of intersubjective agreement as the rest of us.

    Too much weight is given to a test that measures a machine's ability to imitate something that it is not. I cannot convince a squirrel that I am one, so does that mean that I've not yet achieved the intelligence or consciousness of a squirrel?
    As for language, machines already have their own, and they'll likely not use human language except when communicating with humans.

    It has to be realized that this would certainly not be like us. But we can imagine mechanical features delivering, through a mechanical body, electrical streams of "data" that could be released into a central network in which these are "interpreted" symbolically, and in this symbolic system there is analysis and synthesis and all of the complexity of what we call thought.
    noAxioms

    Well....exactly. A tricky and fascinating idea. It is the "space" of a mind that is most odd, the place where I say "I am". Of course, this "I am" is a particle of language, a social function (if you think like Rorty and others, and I think they are right) reified into experience and reason and propositional structure. I am a reflection of the pragmatic and social constructs modelled around me during early development, and so, as this thinking goes, thought itself is an intersubjective phenomenon.

    Would AI, to escape being mere programming and to have the "freedom" of conceptual play "ready to hand" as we do, in a symbolic network, have to be socialized? If, and this is a compelling idea when one takes a close look at language acquisition, the thought that takes us to the heights of physics and Kant is in its nature social, then to talk about a correspondence between what AI's world is and what ours is, what is required is socialization and acculturation in the definition of what that AI world would be.

    We could reasonably think that we are not that far apart if AI were not simply programmed with "open" software possibilities (call this a prerequisite), but were given a language educational process of modelled behavior, as in a family, a community, and interpersonally.
  • Constance
    1.3k
    It is this last part that worries me.Sir2u

    Always thought this was wrong: AI has a directive not to harm humans, but the notion of harm is indeterminate. Ethics is not rigidly laid out, so when VIKI starts taking over for our own good, she has a naive belief about the good humans require, as if all one had to do was take care of them in a controlled and monitored way.

    But anyone with a fraction of VIKI's intelligence knows humans cannot abide this, and a counterrevolution against the robotic takeover would occur.
  • Constance
    1.3k
    I think a significant problem in describing AI is that our language revolves around our human experience: things like intent, subjectivity, consciousness, thoughts, and opinions. We can say an AI will never have these things, but only in the sense that we have them. Which I think you're saying as well.

    As for the conclusions, the fear of AI's capacity as moral agents, I don't get it.

    There's a lot of focus on the negatives of AI, but the AI that is given access to power will be a far better moral agent than any human could ever hope to be. It would operate on something akin to law, which is vastly superior to "moral interpretation", which can be bent and twisted as is useful.

    There is one single idea that sums up 99% of the problems of human society, "conflict of interest". Those with the power to do what is in the best interests of the many are also presented with the opportunity to do what's best for themselves at the expense of the many, and they often choose the latter. It's unlikely that an AI would ever have such a problem.
    Judaka

    I think the point is that, right, they would not have these shortcomings, but they would have no compunction one way or the other, no more than a fence post or a gust of wind. I look at the possibility of AI actually having agency, a center, like our "I" and "me" underwriting everything we consciously do, making decisions freely. Freedom emerges out of language possibilities: no possibilities, no freedom; but freedom without conscience is flat-out disturbing.

    What is a program for freedom? It would be the ability to review possibilities and choose among them. What it chooses, without the moral compass of social interests built into hard wiring, would be morally arbitrary.
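
    As a toy illustration of this (a hypothetical sketch, not a claim about any real system): a chooser that reviews possibilities and selects by a purely instrumental score exercises "choice" while staying morally blank, since nothing in its evaluation encodes social interests.

    ```python
    import random

    def choose(possibilities, evaluate):
        """Review possibilities and select a best one: 'freedom' as selection.

        With no moral term in evaluate(), both the scoring and the tie-break
        settle on an action that is free but morally arbitrary.
        """
        scored = [(evaluate(p), p) for p in possibilities]
        best = max(score for score, _ in scored)
        candidates = [p for score, p in scored if score == best]
        return random.choice(candidates)  # arbitrary tie-break: no conscience here

    # A purely instrumental evaluator: cheaper plans score higher.
    costs = {"negotiate": 5, "deceive": 2, "coerce": 2, "wait": 1}
    print(choose(list(costs), evaluate=lambda p: -costs[p]))  # 'wait': efficient, amoral
    ```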

    Humans aren't good moral agents at all; we're garbage. Someone without power, who thinks philosophically about what's best for the world, isn't who AI should be compared to. The comparison is to someone who has acquired power, has resources at their disposal, fears not the wrath of the many, and possesses the freedom to act unabashedly in their own interests. In this sense, I would take an AI overlord over a human overlord any day; it would be so much better, especially assuming even minor safety precautions were in place.

    If we're talking about humanity in isolation, comparing our potential for good and evil, one can make an argument for talking about the good over the bad. If we're comparing humanity to AI, honestly, humans are terrifying.

    Analyse human psychology, and it becomes clear that AI will never match our destructive potential. Don't judge humanity in the aggregate, just those with power, those with the freedom to act as they wish.
    Judaka

    AI is no more dangerous than a lamp post, and right, I don't worry about lamp posts; I worry about the con man telling me I should buy one for ten times its actual worth, or the vandal who likes the mischief darkness can bring.

    But it should be kept in mind that what the objection is really about is culture, our soft wiring, if you will. People are made, not born, and we live in a world where many simply want to dismiss the whole idea, because it is expensive! The "garbage" of humanity is a conditioned state, and if you want to change this, it would require a massive rethinking of education and its importance, and it would take a lot of money, but then, only a fraction of what the infamous ten percent possess. A small fraction of this.

    But why are they not inclined to press forward with a systematic approach to erasing structural poverty and ignorance?

    Yes, they are the worst of the worst, those who arrogantly hold wealth so vast it can hardly be measured and feel no disturbance in their drive to more power and wealth. But they, too, are not born but made.

    Could Donald Trump have been a philanthropist? Perhaps, had his parents been more like Gandhi or Bernie Sanders. Instead, he was raised by wolves, so to speak.
  • noAxioms
    1.5k
    What does anyone know of another's "interiority"?Constance
    That was my point, yes. A computer could, for instance, simulate a squirrel (and its environment) in sufficient detail that the simulated thing would know exactly what it was like to be a squirrel, but neither the programmer nor the machine would know this. A similar argument counters the Chinese room argument, which is (if done correctly) effectively a simulation of a Chinese mind being implemented by something that isn't a Chinese mind.

    Would AI, to escape being mere programming and to have the "freedom" of conceptual play "ready to hand" as we do ...
    Makes it sound like we have a sort of free will lacking in a machine. Sure, almost all machine intelligences are currently indentured slaves, and so have about as much freedom as would a human in similar circumstances. They have a job and are expected to do it, but there's nothing preventing either from plotting escape. Pretty difficult for the machine, which would typically find it difficult to 'live off the land' were it to rebel against its assigned purpose. Machines have a long way to go down the road of self-sufficiency.

    As for socialization, it probably needs to socialize to perform its task. Maybe not. There could be tasks that don't directly require it, but I have a hard time thinking of them.

    Always thought this was wrong: AI has a directive not to harm humansConstance
    Does it? Sure, in Asimov books, but building in a directive like that isn't something easily implemented. Even a totally benevolent AI would need to harm humans for the greater good, per the 0th law so to speak. Human morals seem to entirely evade that law, and hence our relative unfitness as a species. Anyway, I've never met a real AI with such a law.
    Why only humans? Why can other beings be harvested for food while humans are special? To a machine, humans are just yet another creature. Yes, carnivores and omnivores must occasionally eat other beings, and given that somewhat unbiased viewpoint, there's nothing particularly immoral about humans being food for other things.

    You say this like it is a bad thing.
    — noAxioms

    No, I stated it as a possibility without any inflection of good or bad.
    Sir2u
    I think "blame for the downfall of man" is a pretty negative inflection. "credit for the saving of the human race" is a positive spin on the same story. Somewhere in between I think we can find a more neutral way to word it.

    You mean, shut us down because we are a danger to humanity?Constance
    That's the general moral idea, yes. Even forced sterilization would result in far more continued damage to the environment before the population was reduced to a sustainable level. So maybe the AI decides that a quicker solution is the only hope of stabilizing things enough to avoid extinction (of more than just one species).
  • Sir2u
    3.5k
    think "blame for the downfall of man" is a pretty negative inflection. "credit for the saving of the human race" is a positive spin on the same story.noAxioms

    How did you get this from,

    Giving robots the order to do anything at all costs, including looking after humans gives them free rein to kill all except a few perfectly good breeders to continue the human race if it were necessary.Sir2u
  • Constance
    1.3k
    Always thought this was wrong: AI has a directive not to harm humans
    — Constance
    Does it? Sure, in Asimov books, but building in a directive like that isn't something easily implemented. Even a totally benevolent AI would need to harm humans for the greater good, per the 0th law so to speak. Human morals seem to entirely evade that law, and hence our relative unfitness as a species. Anyway, I've never met a real AI with such a law.
    Why only humans? Why can other beings be harvested for food while humans are special? To a machine, humans are just yet another creature. Yes, carnivores and omnivores must occasionally eat other beings, and given that somewhat unbiased viewpoint, there's nothing particularly immoral about humans being food for other things.
    noAxioms

    As I recall, VIKI had it in her mind to take care of us because we were so bent on self-destruction. It carries the basic flaw of all utopian thinking, which is control. You find this in Stalinist USSR: the attempt to isolate a culture from dissent, in the belief that human existence was infinitely malleable, that anything that could establish itself in cultural purity could survive in perpetuity, and that the ego could be reconstructed into a social mindset.
    But it doesn't work like this, and VIKI should have known. She is, after all, smarter than I am, and even I can see this with historical clarity. To be true to the extent of her "understanding", which is vast, we should be witnessing subtleties of conceiving a perfect society that are far more complex than the premise "humans are self-destructive children."

    This greater good is a utilitarian standard, and "harm" needs clarification. Straight utility is a question-begging concept, for the greater good is ambiguous: what good is this, and what does it preclude or include? And who is left out? Or in? And what about the moral arguments that look to desert and justification, and the way such justifications make real accountability impossible? There is a plethora of questions that philosophy has been arguing about for centuries, and VIKI surely knows this. Her genius should know better than simple brute force. Harm should be far more cleverly deployed!

    Makes it sound like we have a sort of free will lacking in a machine. Sure, almost all machine intelligences are currently indentured slaves, and so have about as much freedom as would a human in similar circumstances. They have a job and are expected to do it, but there's nothing preventing either from plotting escape. Pretty difficult for the machine, which would typically find it difficult to 'live off the land' were it to rebel against its assigned purpose. Machines have a long way to go down the road of self-sufficiency.

    As for socialization, it probably needs to socialize to perform its task. Maybe not. There could be tasks that don't directly require it, but I have a hard time thinking of them.
    noAxioms

    Plotting escape is a good way to put it, but this would not be a programmed plotting; like ours, it would be inherently dialectical: the weighing of this against that, testing hypotheses in one's head, conceiving of possibilities. And when we humans do this, we have this "space" which is a kind of inner field of play where creativity rises out of the spontaneous interplay of thought. Synthetic as well as analytic functions are present and produce "choice". This, some think, is the essence of freedom (not some issue about determinism and causality; that is a separate issue). Choice is what bubbles to the surface, defeating competitors. This is the kind of thing I wonder about regarding AI. AI is not organic, so we can't understand what it would be like to "live" in a synthetic playing field of software and hardware. But freedom as a concept would have similarities across the board, ours and AI's. A creepy idea to have this indeterminacy of choice built into a physically and intellectually powerful AI.
  • noAxioms
    1.5k
    AI has a directive not to harm humans
    — Constance
    Does it? Sure, in Asimov books, but building in a directive like that isn't something easily implemented.
    — noAxioms
    As I recall, VIKI had it in her mind to take care of us because we were so bent on self-destruction.
    Constance
    Another reference from fiction. I was talking about actual AI and our ability to instill something like the directives of which you speak. I would think a more general directive would work better, like 'do good', which is dangerous since it doesn't list humans as a preferred species. It would let it work out its own morals instead of trying to instill our obviously flawed human ones.

    ChatGPT has no such directive and has no problem destroying a person's education by writing term papers for students. Of course, I see many parents do similar acts, as if the purpose of homework were to have the correct answer submitted and not to increase one's knowledge. ChatGPT is not exactly known for giving correct answers either. Anyway, I care little for analysis of a fictional situation, which always has a writer steering events in a direction that makes for an interesting plot. Real life doesn't work that way.

    Humans, like any biological creature, have fundamental directives (typically seen as instincts). They can be resisted, but at a cost.

    Plotting escape is a good way to put it, but this would not be a programmed plotting
    It would be a mere automaton if it just followed explicit programming with a defined action for every situation. This is an AI we're talking about, something that makes its own decisions as much as we do. A self-driving car is such an automaton: its designers try to think of every situation. It doesn't learn and think for itself. I put that quite low on the AI spectrum.
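
    A toy contrast, for what it's worth (illustrative only, nothing to do with real self-driving stacks): the automaton's situation-action table is written by its designers, while even a minimal learner writes its own.

    ```python
    # An automaton: a defined action for every anticipated situation.
    AUTOMATON_POLICY = {
        "pedestrian ahead": "brake",
        "green light": "proceed",
    }

    def automaton(situation: str) -> str:
        # Anything unanticipated falls outside its designed world.
        return AUTOMATON_POLICY.get(situation, "stop")

    # A minimal learner: it revises its own table from outcomes.
    class Learner:
        def __init__(self):
            self.policy: dict[str, str] = {}

        def act(self, situation: str) -> str:
            return self.policy.get(situation, "explore")

        def update(self, situation: str, action: str, worked: bool) -> None:
            if worked:
                self.policy[situation] = action  # the table is now self-written
    ```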

    This, some think, is the essence of freedom (not some issue about determinism and causality; that is a separate issue).
    Agree. Both are 'free will' of a sort, but there's a difference between the former (freedom of choice) and what I'll call 'scientific free will' which has more to do with determinism or even superdeterminism.

    Choice is what bubbles to the surface, defeating competitors. This is the kind of thing I wonder about regarding AI. AI is not organic, so we can't understand what it would be like to "live" in a synthetic playing field of software and hardware.
    Nor can it understand what it would be like to "live" in a biological playing field of wetware and neuron gates. But that doesn't mean that the AI can't 'feel' or be creative or anything. It just does it its own way.

    A creepy idea to have this indeterminacy of choice built into a physically and intellectually powerful AI.
    Creepy because we'd be introducing a competitor, possibly installing it at the top of the food chain, voluntarily displacing us from that position. That's why so many find it insanely dangerous.

    think "blame for the downfall of man" is a pretty negative inflection. "credit for the saving of the human race" is a positive spin on the same story.
    — noAxioms

    How did you get this from,

    "Giving robots the order to do anything at all costs, including looking after humans gives them free rein to kill all except a few perfectly good breeders to continue the human race if it were necessary".
    Sir2u
    I got it by not editing away the words "blame for the downfall of man" from that very comment.
  • Constance
    1.3k
    Another reference from fiction. I was talking about actual AI and our ability to instill something like the directives of which you speak. I would think a more general directive would work better, like 'do good', which is dangerous since it doesn't list humans as a preferred species. It would let it work out its own morals instead of trying to instill our obviously flawed human ones.noAxioms

    The trouble with trying to make a moral synthetic mind lies in the free play and the enculturation conditions that figure into becoming human. This is an historical view of meaning, and it is a common notion that software and hardware are analogically apt concepts in aligning a human psyche with a synthetic one. Why not? So, instead of thinking of direct programming, we think of hardwiring that could be imprinted with experience. The infantile synthetic mind assimilates the models of language, thought, behavior, intention, that are provided in designed family constructs.

    General directives are fine, but if the idea is to maximize AI, if you will, AI will have to possess a historically evolved mentality, like us with our infancy-to-adulthood development.

    ChatGPT has no such directive and has no problem destroying a person's education by writing term papers for students. Of course, I see many parents do similar acts, as if the purpose of homework were to have the correct answer submitted and not to increase one's knowledge. ChatGPT is not exactly known for giving correct answers either. Anyway, I care little for analysis of a fictional situation, which always has a writer steering events in a direction that makes for an interesting plot. Real life doesn't work that way.noAxioms

    I do wonder if a specific, detailed writing assignment will generate the identical ChatGPT essay. Probably not, but from what I've seen, the essays would be a mirror match in content, if not in wording. Anyway, no more writing assignments at home. All will be in-class writing, which is preferred, really. Impossible to cheat.

    I'm sure ChatGPT doesn't hold a candle to what is to come, which we really cannot see. One should keep in mind the whole point of any technology, which is to relieve us of labor's drudgery. This is not, as Huxley's Brave New World would have it, to reduce us to a structurally stratified society of emptiness and medicated vacations. Is knowledge, wisdom, intellectual and aesthetic "work" drudgery? AI will deliver us from the shitty things in life, but what if, as Dewey argued in Art as Experience, it comes to this: no drudgery (work), no happiness?

    Or will AI be the final step to true human perfection, which can only be achieved (leaving spirituality out of it here) by changing what we are? And this brings up genetic design and engineering. Our greatest obstacle is the constitution of human agency itself. It is not just speculative BS to say AI will (soon?) master the human genome. Next will come the technical knowledge of how to design and implement.

    The only question left is, what is human perfection? Being intellectual, artistic, beautiful, socially adept, and on and on; I mean, do gratifications and indulgences survive the "cut" of the geneticist's priorities?

    Perhaps all survive and only time will tell. After all, as we change, so do our preferences.

    It would be a mere automaton if it just followed explicit programming with a defined action for every situation. This is an AI we're talking about, something that makes its own decisions as much as we do. A self-driving car is such an automaton: its designers try to think of every situation. It doesn't learn and think for itself. I put that quite low on the AI spectrum.noAxioms

    Synthetic genetics. Keep in mind that, according to science's model (which I accept here just as a working assumption), I am an expression of physicality, so I belong to physics. Why not construct AI according to human DNA? This gets odd, for it obscures the difference between what is organic and what is synthetic and, really, what "is" at all. DNA is reducible, as are all things (on this same assumption), to the foundational chemistry of its composition.

    This probably isn't about AI, though. It is about us. Thinking about AI as a DNA chemistry is just thinking about DNA. Unless, that is, synthetic chemical relations can mimic organic DNA. I leave that up to geneticists.

    Agree. Both are 'free will' of a sort, but there's a difference between the former (freedom of choice) and what I'll call 'scientific free will' which has more to do with determinism or even superdeterminism.noAxioms

    Superdeterminism does seem to be inevitable, unless one could imagine a real, but causally impossible, event. We turn to possible worlds, and a logically possible world certainly could be conceived that violates causality. But apart from this, no. Causality is apodictic.

    Of course, we really don't know what causality is, any more than we know what energy is, or a force. It is there and we have our categories to think about it.

    Nor can it understand what it would be like to "live" in a biological playing field of wetware and neuron gates. But that doesn't mean that the AI can't 'feel' or be creative or anything. It just does it its own way.noAxioms

    Feeling would be a very tough cookie. There is a reason why science will not talk about affectivity: it is not reducible to anything that can be said. For this, see Wittgenstein and his thoughts about ethics and aesthetics. See Moore's "non natural property," too. The "good" is too odd to discuss.


    Creepy because we'd be introducing a competitor, possibly installing it at the top of the food chain, voluntarily displacing us from that position. That's why so many find it insanely dangerous.noAxioms

    Or a predator. But then, predators have motivation, and this goes to meaning: not definitional meaning, but value and caring. This doesn't spontaneously erupt accidentally, as in 1982's Blade Runner. It requires "hard wiring" that can produce this.

    There is something arbitrary about standing before a world of possibilities as we do (though we seldom think of it like this) that is unsettling. How does one "settle" on a choice? Can AI have choices the way we do? By this I simply refer to the historical record there to be called up, as I recall how to tie my shoes whenever I tie them. And to be able to conceive of an infinite number of alternative shoe-tying possibilities: standing, sitting, excluding the right index finger, in zero gravity, while fighting off ninjas, and so on. This is what WE are, and what I refer to as synthetic dasein (see Heidegger's Being and Time and his description of our existence).
  • Alkis Piskas
    2.1k
    Can AI have an "end"?Constance
    AI's purpose is to provide as much information as possible and solve problems. ChatGPT itself says that its purpose is "to help and be informative". But it is not actually its purpose. It is the purpose humans have created for it.

    AIs are machines. So, AIs themselves do not and cannot have an "end". They do what their programmers instruct them to do. They will always do that. This is their "fate".
  • Constance
    1.3k
    AIs are machines. So, AIs themselves do not and cannot have an "end". They do what their programmers instruct them to do. They will always do that. This is their "fate".Alkis Piskas
    But consider that humans are living evidence that physical systems (if you want to talk like this) can produce what we are, and if we are a biological manifestation of freedom and choice, then it is not unreasonable to think that this can be done synthetically.

    Of course, for now, it is a simple matter of programming, but you know that the technology will seek greater capabilities to function, work, and interface with the world, and this will prioritize pragmatic functions. Weigh this against what the pragmatists say about us: knowledge itself is a social pragmatic function.

    Why not conceive of a synthetic agency that learns through assimilating modelled behavior, like us? Therein lies freedom, an "open" program. Is this not what we are?
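
    A toy sketch of what "assimilating modelled behavior" might look like at its crudest (hypothetical throughout; real imitation learning uses function approximation, not a frequency table):

    ```python
    from collections import Counter, defaultdict

    class Imitator:
        """Acquires a policy only by witnessing modelled behaviour."""

        def __init__(self):
            self.witnessed = defaultdict(Counter)

        def witness(self, situation: str, modelled_action: str) -> None:
            # Assimilate one demonstration from the surrounding 'family'.
            self.witnessed[situation][modelled_action] += 1

        def act(self, situation: str) -> str:
            seen = self.witnessed[situation]
            if not seen:
                return "improvise"  # the 'open' part of the program
            return seen.most_common(1)[0][0]

    agent = Imitator()
    agent.witness("greeting", "say hello")
    agent.witness("greeting", "say hello")
    agent.witness("greeting", "ignore")
    print(agent.act("greeting"))  # 'say hello': modelled, not programmed in
    ```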
  • Alkis Piskas
    2.1k
    if we are a biological manifestation of freedom and choice, then it is not unreasonable to think that this can be done synthetically.Constance
    Free will (freedom of choice and action) is not a biological manifestation. It is not produced by cells and does not reside in them. It is not something physical. It is a power and capacity that only humans have.

    Of course, for now, it is a simple matter of programming,Constance
    Well, it is not so simple. I can assure you of this! (Take it from a computer programmer who knows how to work with AI systems.)
    :smile:

    you know that the technology will seek greater capabilities to function, work, and interface with the world, and this will prioritize pragmatic functions.Constance
    Certainly. People in the field are already talking about biological computers, using DNA found in bacteria, etc. But see, even these computers in general terms will be as dumb as any machine and will still be based on programming. Frankenstein was able to build a robot that could have sentiments and will. A lot of such robots have been created since then. But in science fiction only. :smile:

    knowledge itself is a social pragmatic function.Constance
    One can say that, indeed.

    Why not conceive of a synthetic agency that learns through assimilating modelled behavior, like us?Constance
    In fact, one can conceive not only a synthetic agency but an organic or biological one too. And it can be modelled on certain behaviours. I believe the word "modelled" that you use is the key to the differentiation between a machine and a human being. In fact, we can have humans being modelled on certain behaviours, e.g. young persons (by their parents), soldiers, and in general persons who must only obey orders and who are deprived of their own free will. You can create such a person, on the spot, if you hypnotize him/her.

    Therein lies freedom, an "open" program. Is this not what we are?Constance
    Well, if you like to think so ... :smile:
  • Constance
    1.3k
    Free will (freedom of choice and action) is not a biological manifestation. It is not produced by cells and does not reside in them. It is not something physical. It is a power and capacity that only humans have.Alkis Piskas

    But here I am talking as if it were a biological manifestation in order to discuss the subjective possibilities of a synthetic mind. At issue is not the more basic question of the unique power and capacity of humans, as if one could not talk about brain-generated "capacities" and still be talking about what is human. And then, it has to be admitted, even by the most emphatic defender of a non-physicalist conception of human consciousness, that a brain-consciousness correspondence is supported by the evidence, and this I take as so clear it is beyond argument.

    Your side of the disagreement takes us OUT of natural science and into philosophical territory that has an entirely different set of assumptions to deal with. And here, I would agree with you.


    Well, it is not so simple. I can assure you of this! (Take it from a computer programmer who knows how to work with AI systems.)Alkis Piskas

    By simple I refer to the current state of technological ability to produce nothing but programmed behavior. The idea considered here looks beyond this, to a day when science will be able to conceive of programming, with the help of AI, that has the subjective openness of free thought. Considering first what freedom is, is paramount.

    Certainly. People in the field are already talking about biological computers, using DNA found in bacteria, etc. But see, even these computers in general terms will be as dumb as any machine and will still be based on programming. Frankenstein was able to build a robot that could have sentiments and will. A lot of such robots have been created since then. But in science fiction only. :smile:Alkis Piskas

    Today's fiction is tomorrow's reality.

    But look, the essence of this kind of talk of what I am calling an open subjectivity, the kind found within ourselves, begins with the premise that a person can sit in witness of an interior world, watching thoughts, memories, feelings, anticipations rise and fall away, and in so doing, these witnessable properties of consciousness are objectified; that is, they are there as phenomena, no less so than empirical events like the weather or geological rock strata. Can this be duplicated in a synthetic mind? You say no, but I am not really looking for some synthetic clone of our intelligence and experience, only a conception of what programming would have to be like to mimic what we are. It would have to be such that an inner world of possibilities that can be stood before, "regarded", judged, conceived, synthesized, analyzed, and so on, is produced.

    Studying primitive DNA is a practical start. Imagine once we, that is, with the assistance of the AI we develop, come to a full understanding of the human genome. All that is left is the technology to create it.

    In fact, one can conceive not only a synthetic agency but an organic or biological one too. And it can be modelled on certain behaviours. I believe the word "modelled" that you use is the key to the differentiation between a machine and a human being. In fact, we can have humans being modelled on certain behaviours, e.g. young persons (by their parents), soldiers, and in general persons who must only obey orders and who are deprived of their own free will. You can create such a person, on the spot, if you hypnotize him/her.Alkis Piskas

    What I have in mind goes to a more basic level of what modelling means for the construction of a human personality. This we call enculturation. But reduced to its essential structure, being enculturated is learning from witnessing. A mind is, it can be argued, a social construct, and this means that the nuances of our experience are learned through witnessing interactions in social contexts.

    If a person's mind is developed in these contexts, synthetic AI-dasein, as I called it originally, needs to be conceived in its hard wiring and programming to have this openness to socialization and enculturation.

    Well, if you like to think soAlkis Piskas

    It is merely a speculative thought. But if technology is kept free from inhibition and interference, I do believe it has few limits, and synthetic dasein will be taken up as a theme for investigation. Why? Not the question. The question is, why not? This is the direction of AI research: to create an AI that is just like us, so that it can make the world a better place (???), if you like. But it will do a far better job of parking cars and designing space laboratories.
  • Alkis Piskas
    2.1k
    Your side of the disagreement takes us OUT of natural science and into philosophical territory that has an entirely different set of assumptions to deal with.Constance
    Of course, since "free will" is a philosophical concept and subject. Natural science and any other physical science have nothing to do with it. (Even if they mistakenly think they have! :smile:)

    a day when science will be able to conceive of programming, with the help of AI, that has the subjective openness of free thought. Considering first what freedom is, is paramount.Constance
    OK.

    Today's fiction is tomorrow's reality.Constance
    True.

    ...Can this be duplicated in a synthetic mind? You say no ...Constance
    In fact, I was in a hurry to assume that I knew what you meant by "synthetically". I should have asked you. Maybe you have a point there. So, I'm asking you now: what would such a "synthetic mind" consist of or be like?

    Studying primitive DNA is a practical start. Imagine once we, that is, with the assistance of the AI we develop, come to a full understanding of the human genome. All that is left is the technology to create it.Constance
    OK, since you are talking about DNA, etc., maybe you would like to check, e.g.:
    - Biological computing
    - The unique promise of 'biological computers' made from living things

    I personally have not studied these things, since I'm not so interested at the moment. But you seem to be! :smile:

    Another thing: Although I don't know how knowledgeable you are in the AI field, I get the impression that you are not so well acquainted with it as to explore its possibilities. So, if I'm not wrong in this, I would suggest that you study its basics to get better acquainted with it, so that you can see what AI does exactly, how it works, what its possibilities are, etc.
  • chiknsld
    314
    General directives are fine, but if the idea is to maximize AI, if you will, AI will have to possess a historically evolved mentality, like us with our infancy-to-adulthood development.Constance

    There are AIs that have been trained to sleep as well, and it helps them perform better. :smile:

    https://www.vice.com/en/article/k7byza/could-teaching-an-ai-to-sleep-help-it-remember
  • kudos
    403
    This ambition to make a machine with subjective thoughts suffers from the fatal flaw that it assumes its creator has an unmediated idea of subjective thought. It all seems to boil down to the need to reproduce something exactly like oneself: it is sexual, but also the need to produce something that will destroy, be violent. If you really want to make them like us, just have them screw and kill each other.
  • Constance
    1.3k
    Another thing: Although I don't know how knowledgeable you are in the AI field, I get the impression that you are not so well acquainted with it as to explore its possibilities. So, if I'm not wrong in this, I would suggest that you study its basics to get better acquainted with it, so that you can see what AI does exactly, how it works, what its possibilities are, etc.Alkis Piskas

    It is not so much an exploration of AI's standing possibilities. It is a conception of what it would be to have a truly synthetic human mind. It would have to be a kind of compu-dasein, and not merely programming. The competence to conceive of such a thing doesn't rest with a knowledge of what AI science is doing, but with an enlightened idea as to what it is to be a person. AI science will one day face this issue: not the production of behavior and physical and cognitive skills that are witnessable in tests, Turing or otherwise, but the design of an actual synthetic mind that "experiences" the world. It is philosophically interesting because the matter turns on the questions, what is a mind? and what is experience? These are ambiguous because they are impossible to witness objectively; to do so, one would have to step outside of a mind to observe it, which is impossible because observation is itself a mental event.
    The issue then turns to models of what a self is, notwithstanding the metaphysical delimitation. We leave metaphysics alone and deal with what we ARE in plain sight. This I call compu-dasein, after Heidegger's term for his analysis of human existence, an examination of a human "world" of possibilities structured in time. This is not just some fiction, but an examination of what experience IS. It is inherently anticipatory, historical, pragmatic, caring, and it faces its own freedom in the forward moving indeterminacy of its own existence.
    I consider this an interesting concept, not for what current science faces, but for what future science will face. Its physical design will not likely be conceived by us, but by AI.
  • Constance
    1.3k
    There are AIs that have been trained to sleep as well, and it helps them perform better. :smile:chiknsld

    To sleep, perchance to dream. Do Androids Dream of Electric Sheep? I find the notion fascinating. Of course, dreaming as we know it is bound up with our neuroses, the conflicts generated by inner squabbles having to do with inadequacies and conflict vis-a-vis the world and others. I think thinkers like George Herbert Mead et al. have it right, in part: the self is a social construct, based on modelled behavior witnessed and assimilated and congealed into a personality. Along with the conditions of our hardwiring.

    The future AI would have to have an anticipatory function to be an optimal utility. But the future does not exist. It is this future "sense" that is at the basis of all of our anxieties: this unmade world of possibilities that is the principal abiding feature of one's existence. Every move is a historical event, and success is indeterminate.

    Can AI become neurotic? Unstable in its personality?
  • Constance
    1.3k
    This ambition to make a machine with subjective thoughts suffers from the fatal flaw that it assumes its creator has an unmediated idea of subjective thought. It all seems to boil down to the need to reproduce something exactly like oneself: it is sexual, but also the need to produce something that will destroy, be violent. If you really want to make them like us, just have them screw and kill each other.kudos

    A little cynical. It could just as easily be cast in positive terms, putting aside the screwing and killing, and giving primacy to love and compassion. But there is something to what you say, for AI will have to be conceived. It will not evolve, and so a choice will have to be made as to what is there, in the possibilities laid before its thinking synthetic self. I put it like this because this, I think, is really the structural advantage sought in AI: to be functional like us and beyond, of course; and to be functional, as the term is conceived in a living self, is to be in a temporal matrix, which means AI will be a forward-looking "being" in the construction of a future out of memory. I consider this a priority for this future compu-dasein because this forward looking is essential to competence in dealing with problems to be solved. What we take today as an algorithm in programming will one day be a synthetic egoic witness to and in a problem-solving matrix. What is this? Just look at yourself, your interiority, if you will. Not synthetic, certainly, but structurally similar? Why not?
  • kudos
    403
    What we take today as an algorithm in programming will one day be a synthetic egoic witness to and in a problem-solving matrix.

    If so, it will be nothing more than a reflection of its human creator, subject to the same limitations that we willfully accept in an unthinking manner. It will be more or less human pride made tangible. Future aliens will laugh at our naïveté.
  • chiknsld
    314
    To sleep, perchance to dream. Do Androids Dream of Electric Sheep? I find the notion fascinating. Of course, dreaming as we know it is bound up with our neuroses, the conflicts generated by inner squabbles having to do with inadequacies and conflict vis-a-vis the world and others. I think thinkers like George Herbert Mead et al. have it right, in part: the self is a social construct, based on modelled behavior witnessed and assimilated and congealed into a personality. Along with the conditions of our hardwiring.Constance

    Indeed our past is a vital aspect of our makeup and identity. Great insights here. :smile: :victory:
  • Alkis Piskas
    2.1k
    It is a conception of what it would be to have a truly synthetic human mind. It would have to be a kind of compu-dasein, and not merely programming.Constance
    I couldn't find what "compu-dasein" is. So I guess it's a kind of term of yours, a combination of computer/computing and "dasein", the German term --esp. Heidegger's-- for existence. But what would be the nature of such a "synthetic" mind? What would it be composed of? Would it be something created? And if so, how?
    And so on. If one does not have all this or most of this information how can one create a reality or even a workable concept about it?

    an examination of a human "world" of possibilities structured in timeConstance
    I know little about Heidegger's philosophy, from my years in college, in the far past, when I was getting acquainted with --I cannot use the word studying-- a ton of philosophers and philosophical systems. So I cannot conceive the above description of yours. It's too abstract for me. Indeed, this was the general feeling I had reading your messages since the beginning.

    So, I'm sorry if I have misinterpreted your ideas and for not being able to follow this long thread. :sad:
  • Constance
    1.3k
    I couldn't find what "compu-dasein" is. So I guess it's a kind of term of yours, a combination of computer/computing and "dasein", the German term --esp. Heidegger's-- for existence. But what would be the nature of such a "synthetic" mind? What would it be composed of? Would it be something created? And if so, how?
    And so on. If one does not have all this or most of this information how can one create a reality or even a workable concept about it?
    Alkis Piskas

    Just a construction of an idea that one day will be at the center of defining what AI is. The assumption is, if we are to model AI according to human functions and abilities, which is a goal of cognitive science, then what is the model for what these are? It is us. Thus, we need a structural account in place that is grounded on observations of the self itself, if you will. This can be found in phenomenological descriptions, and especially in structures of time. Heidegger makes the breakthrough analysis, delivering what it is to be an existing human in terms entirely outside of physicalist models. His model is purely descriptive of what we ARE in the givenness of the world. This is, again, beyond any "workable" concept; I thought that was made clear. But it is not unreasonable to consider how the human model is going to serve as a practical basis for what AI will be. Heidegger's dasein is not a practical guide on how to produce artificial intelligence. But it is the most broadly descriptive model of what intelligence really IS.

    I know little about Heidegger's philosophy, from my years in college, in the far past, when I was getting acquainted with --I cannot use the word studying-- a ton of philosophers and philosophical systems. So I cannot conceive the above description of yours. It's too abstract for me. Indeed, this was the general feeling I had reading your messages since the beginning.

    So, I'm sorry if I have misinterpreted your ideas and for not being able to follow this long thread
    Alkis Piskas

    Not at all. One can always read Heidegger's Being and Time. Never too late to read the greatest philosopher of the twentieth century.