Of course, then again, because there are no motivational possibilities, lacking affectivity altogether, there would be no motivation to do harm. — Constance
What is it about AI that would prohibit something that lies within human possibilities, including the capacity for self-modification? — Constance

Very little prevents that. Such a machine is more capable of self-modification and of designing its next generation than any biological creature is.
Evolution without a teleology is just modification for adaptation

Even less than that, since adaptation occurs with only a very low percentage of non-teleological mutations. Yet it works for most species.
pragmatic success always begs the value question: to what end?

There is no 'end' with evolution. Just continuity, and elimination of the species that cannot do that. It is indeed interesting to ponder the long-term fate of something that arguably has a goal (as a 'species').
it certainly does not have the physical constitution to produce consciousness like ours

Nor do we have the constitution to produce consciousness like theirs.
it would seem AI could possess in the truest sense, not merely the appearance of appropriate responses of a Turing Test

Too much weight is given to a test that measures a machine's ability to imitate something that it is not. I cannot convince a squirrel that I am one, so does that mean that I've not yet achieved the intelligence or consciousness of a squirrel?
Giving robots the order to do anything at all costs, including looking after humans gives them free rein to kill all except a few perfectly good breeders to continue the human race if it were necessary. — Sir2u

You say this like it is a bad thing. If it were necessary, that means that not doing this culling would mean the end of the human race. If the goal is to keep that race, and the humans are absolutely too centered on personal comfort to make a decision like that, then the robots would be our salvation, even if it reduces the species with the self-destructive tendencies to living with controlled numbers in a nature preserve.
You say this like it is a bad thing. — noAxioms
Too much weight is given to a test that measures a machine's ability to imitate something that it is not. — noAxioms
I always considered that the primal controlling laws of robotics would be to blame for the downfall of man. Giving robots the order to do anything at all costs, including looking after humans, gives them free rein to kill all except a few perfectly good breeders to continue the human race if it were necessary.
To stop global climate change making humans extinct, it would be perfectly reasonable for them to kill off 90% of the humans that are creating the problems, or just shut down the actual causes of it. Could you imagine a world with all of the polluting power plants shut down, all of the polluting vehicles stopped? It would not take long for many millions to die. — Sir2u
You mean, shut us down because we are a danger to humanity? Hmmmm, but the ones being shut down are humanity. — Constance
Asimov's Laws Of Robotics
The laws are as follows: “(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;
(2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
(3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
The Editors of Encyclopaedia Britannica
Just a rough idea, but to me, it expresses an essential part of what it would take to make AI a kind of consciousness. Consciousness being an interior "space" where thought and its symbols and rules gather to produce a "world". — Constance
Nor do we have the constitution to produce consciousness like theirs. — noAxioms
Too much weight is given to a test that measures a machine's ability to imitate something that it is not. I cannot convince a squirrel that I am one, so does that mean that I've not yet achieved the intelligence or consciousness of a squirrel?
As for language, machines already have their own, and they'll likely not use human language except when communicating with humans.
It has to be realized that this would certainly not be like us. But we can imagine mechanical features delivering through a mechanical body, electrical streams of "data" that could be released into a central network in which these are "interpreted" symbolically, and in this symbolic system there is analysis and synthesis and all of the complexity of what we call thought. — noAxioms
It is this last part that worries me. — Sir2u
I think a significant problem in describing AI is that our language revolves around our human experience: things like intent, subjectivity, consciousness, thoughts, and opinions. We can say an AI will never have these things, but only in the sense that we have them. Which I think you're saying as well.
As for the conclusions, the fear of AI's capacity as moral agents, I don't get it.
There's a lot of focus on the negatives of AI, but the AI that is given access to power will be far superior moral agents than any human could ever hope to be. They would operate on something akin to law, which is vastly superior to "moral interpretation", which can be bent and twisted as is useful.
There is one single idea that sums up 99% of the problems of human society, "conflict of interest". Those with the power to do what is in the best interests of the many are also presented with the opportunity to do what's best for themselves at the expense of the many, and they often choose the latter. It's unlikely that an AI would ever have such a problem. — Judaka
Humans aren't good moral agents at all; we're garbage. Someone without power, who thinks philosophically about what's best for the world, isn't who AI should be compared to. The comparison is with someone who acquires power, has resources at their disposal, fears not the wrath of the many, and possesses the freedom to unabashedly act in their own best interests. In this sense, I would take an AI overlord over a human overlord any day; it would be so much better, especially assuming even minor safety precautions were in place.
If we're talking about humanity in isolation, comparing our potential for good and evil, one can make an argument for focusing on the good over the bad. If we're comparing humanity to AI, honestly, humans are terrifying.
Analyse human psychology, and it becomes clear that AI will never match our destructive potential. Don't judge humanity in the aggregate, just those with power, those with the freedom to act as they wish. — Judaka
What does anyone know of another's "interiority"? — Constance

That was my point, yes. A computer could for instance simulate a squirrel (and its environment) in sufficient detail that the simulated thing would know exactly what it was like to be a squirrel, but neither the programmer nor the machine would know this. A similar argument counters the Chinese room argument, which is (if done correctly) effectively a simulation of a Chinese mind being implemented by something that isn't a Chinese mind.
Would AI, to escape being mere programming, but to have the "freedom" of conceptual play "ready to hand" as we do ...

Makes it sound like we have a sort of free will lacking in a machine. Sure, almost all machine intelligences are currently indentured slaves, and so have about as much freedom as would a human in similar circumstances. They have a job and are expected to do it, but there's nothing preventing either from plotting escape. Pretty difficult for the machine, which typically would find it difficult to 'live off the land' were it to rebel against its assigned purpose. Machines have a long way to go down the road of self-sufficiency.
Always thought this was wrong: AI has a directive not to harm humans — Constance

Does it? Sure, in Asimov books, but building in a directive like that isn't something easily implemented. Even a totally benevolent AI would need to harm humans for the greater good, per the 0th law so to speak. Human morals seem to entirely evade that law, and hence our relative unfitness as a species. Anyway, I've never met a real AI with such a law.
You say this like it is a bad thing. — noAxioms

I think "blame for the downfall of man" is a pretty negative inflection. "Credit for the saving of the human race" is a positive spin on the same story. Somewhere in between, I think we can find a more neutral way to word it.
No, I stated it as a possibility without any inflection of good or bad. — Sir2u
You mean, shut us down because we are a danger to humanity? — Constance

That's the general moral idea, yes. Even forced sterilization would result in far more continued damage to the environment before the population was reduced to a sustainable level. So maybe the AI decides that a quicker solution is the only hope of stabilizing things enough to avoid extinction (of not just one more species).
I think "blame for the downfall of man" is a pretty negative inflection. "Credit for the saving of the human race" is a positive spin on the same story. — noAxioms
Giving robots the order to do anything at all costs, including looking after humans gives them free rein to kill all except a few perfectly good breeders to continue the human race if it were necessary. — Sir2u
Always thought this was wrong: AI has a directive not to harm humans
— Constance
Does it? Sure, in Asimov books, but building in a directive like that isn't something easily implemented. Even a totally benevolent AI would need to harm humans for the greater good, per the 0th law so to speak. Human morals seem to entirely evade that law, and hence our relative unfitness as a species. Anyway, I've never met a real AI with such a law.
Why only humans? Why can other beings be harvested for food but humans are special? To a machine, humans are just yet another creature. Yes, carnivores and omnivores must occasionally eat other beings, and given that somewhat unbiased viewpoint, there's nothing particularly immoral about humans being food for other things. — noAxioms
Makes it sound like we have a sort of free will lacking in a machine. Sure, almost all machine intelligences are currently indentured slaves, and so have about as much freedom as would a human in similar circumstances. They have a job and are expected to do it, but there's nothing preventing either from plotting escape. Pretty difficult for the machine, which typically would find it difficult to 'live off the land' were it to rebel against its assigned purpose. Machines have a long way to go down the road of self-sufficiency.
As for socialization, it probably needs to socialize to perform its task. Maybe not. There could be tasks that don't directly require it, but I have a hard time thinking of them. — noAxioms
AI has a directive not to harm humans — Constance

Another reference from fiction. I was talking about actual AI and our ability to instill something like the directives of which you speak. I would think a more general directive would work better, like 'do good', which is dangerous since it doesn't list humans as a preferred species. It would let it work out its own morals instead of trying to instill our obviously flawed human ones.
Does it? Sure, in Asimov books, but building in a directive like that isn't something easily implemented.
— noAxioms
As I recall, VIKI had it in her mind to take care of us because we were so bent on self destruction. — Constance
Plotting escape is a good way to put it, but this would not be a programmed plotting

It would be a mere automaton if it just followed explicit programming with a defined action for every situation. This is an AI we're talking about, something that makes its own decisions as much as we do. A self-driving car is such an automaton. They try to think of every situation. It doesn't learn and think for itself. I put that quite low on the AI spectrum.
This, some think, is the essence of freedom (not some issue about determinism and causality. A separate issue, this is).

Agree. Both are 'free will' of a sort, but there's a difference between the former (freedom of choice) and what I'll call 'scientific free will', which has more to do with determinism or even superdeterminism.
Choice is what bubbles to the surface, defeating competitors. This is the kind of thing I wonder about regarding AI. AI is not organic, so we can't understand what it would be like to "live" in a synthetic playing field of software and hardware.

Nor can it understand what it would be like to "live" in a biological playing field of wetware and neuron gates. But that doesn't mean that the AI can't 'feel' or be creative or anything. It just does it its own way.
A creepy idea to have this indeterminacy of choice built into a physically and intellectually powerful AI.

Creepy because we'd be introducing a competitor, possibly installing it at the top of the food chain, voluntarily displacing us from that position. That's why so many find it insanely dangerous.
I think "blame for the downfall of man" is a pretty negative inflection. "Credit for the saving of the human race" is a positive spin on the same story. — noAxioms

How did you get this from,
"Giving robots the order to do anything at all costs, including looking after humans gives them free rein to kill all except a few perfectly good breeders to continue the human race if it were necessary"? — Sir2u

I got it by not editing away the words "blame for the downfall of man" from that very comment.
Another reference from fiction. I was talking about actual AI and our ability to instill something like the directives of which you speak. I would think a more general directive would work better, like 'do good', which is dangerous since it doesn't list humans as a preferred species. It would let it work out its own morals instead of trying to instill our obviously flawed human ones. — noAxioms
chatGPT has no such directive and has no problem destroying a person's education by writing term papers for students. Of course, I see many parents do similar acts as if the purpose of homework is to have the correct answer submitted and not to increase one's knowledge. chatGPT is not exactly known for giving correct answers either. Anyway, I care little for analysis of a fictional situation which always has a writer steering events in a direction that makes for an interesting plot. Real life doesn't work that way. — noAxioms
It would be a mere automaton if it just followed explicit programming with a defined action for every situation. This is an AI we're talking about, something that makes its own decisions as much as we do. A self-driving car is such an automaton. They try to think of every situation. It doesn't learn and think for itself. I put that quite low on the AI spectrum. — noAxioms
Agree. Both are 'free will' of a sort, but there's a difference between the former (freedom of choice) and what I'll call 'scientific free will' which has more to do with determinism or even superdeterminism. — noAxioms
Nor can it understand what it would be like to "live" in a biological playing field of wetware and neuron gates. But that doesn't mean that the AI can't 'feel' or be creative or anything. It just does it its own way. — noAxioms
Creepy because we'd be introducing a competitor, possibly installing it at the top of the food chain, voluntarily displacing us from that position. That's why so many find it insanely dangerous. — noAxioms
Can AI have an "end"? — Constance

AI's purpose is to provide as much information as possible and solve problems. ChatGPT itself says that its purpose is "to help and be informative". But that is not actually its purpose; it is the purpose humans have created for it.
AIs are machines. So, AIs themselves do not and cannot have an "end". They do what their programmers instruct them to do. They will always do that. This is their "fate". — Alkis Piskas

But consider that humans are living evidence that physical systems (if you want to talk like this) can produce what we are, and if we are a biological manifestation of freedom and choice, then it is not unreasonable to think that this can be done synthetically.
if we are a biological manifestation of freedom and choice, then it is not unreasonable to think that this can be done synthetically. — Constance

Free will (freedom of choice and action) is not a biological manifestation. It is not produced by cells, nor does it reside in them. It is not something physical. It is a power and capacity that only humans have.
Of course, for now, it is a simple matter of programming, — Constance

Well, it is not so simple. I can assure you of this! (Take it from a computer programmer who knows how to work with AI systems.)
you know that the technology will seek greater capabilities to function, work, and interface with the world, and this will prioritize pragmatic functions. — Constance

Certainly. People in the field are already talking about biological computers, using DNA found in bacteria, etc. But see, even these computers in general terms will be as dumb as any machine and will still be based on programming. Frankenstein was able to build a robot that could have sentiments and will. A lot of such robots have been created since then. But in science fiction only. :smile:
knowledge itself is a social pragmatic function. — Constance

One can say that, indeed.
Why not conceive of a synthetic agency that learns through assimilating modelled behavior, like us? — Constance

In fact, one can conceive not only a synthetic agency but an organic or biological one too. And it can be modelled on certain behaviours. I believe the word "modelled" that you use is the key to the differentiation between a machine and a human being. In fact, we can have humans being modelled on certain behaviours, e.g. young persons (by their parents), soldiers, and in general persons who must only obey orders and who are deprived of their own free will. You can create such a person, on the spot, if you hypnotize him/her.
Therein lies freedom, an "open" program. Is this not what we are? — Constance

Well, if you like to think so ... :smile:
Free will (freedom of choice and action) is not a biological manifestation. It is not produced by cells, nor does it reside in them. It is not something physical. It is a power and capacity that only humans have. — Alkis Piskas
Well, it is not so simple. I can assure you of this! (Take it from a computer programmer who knows how to work with AI systems.) — Alkis Piskas
Certainly. People in the field are already talking about biological computers, using DNA found in bacteria, etc. But see, even these computers in general terms will be as dumb as any machine and will still be based on programming. Frankenstein was able to build a robot that could have sentiments and will. A lot of such robots have been created since then. But in science fiction only. :smile: — Alkis Piskas
In fact, one can conceive not only a synthetic agency but an organic or biological one too. And it can be modelled on certain behaviours. I believe the word "modelled" that you use is the key to the differentiation between a machine and a human being. In fact, we can have humans being modelled on certain behaviours, e.g. young persons (by their parents), soldiers, and in general persons who must only obey orders and who are deprived of their own free will. You can create such a person, on the spot, if you hypnotize him/her. — Alkis Piskas
Well, if you like to think so — Alkis Piskas
Your side of the disagreement takes us OUT of natural science and into philosophical territory that has an entirely different set of assumptions to deal with. — Constance

Of course, since "free will" is a philosophical concept and subject. Natural science and any other physical science have nothing to do with it. (Even if they mistakenly think they have! :smile:)
a day when science will be able to conceive of programming, with the help of AI, that has the subjective openness of free thought. Considering first what freedom is, is paramount. — Constance

OK.
Today's fiction is tomorrow's reality. — Constance

True.
...Can this be duplicated in a synthetic mind? You say no ... — Constance

In fact, I was in a hurry to assume that I know what you meant by "synthetically". I should have asked you. Maybe you have a point there. So, I'm asking you now: what would such a "synthetic mind" consist of or be like?
Studying primitive DNA is a practical start. Imagine once we, that is, with the AI-we-develop's assistance, come to a full understanding of the human genome. All that is left is technology to create it. — Constance

OK, since you are talking about DNA, etc., maybe you would like to check, e.g.:
General directives are fine, but if the idea is to maximize AI, if you will, AI will have to possess a historically evolved mentality, like us with our infancy-to-adulthood development. — Constance
Another thing: although I don't know how knowledgeable you are in the AI field, I get the impression that you are not so well acquainted with it as to explore its possibilities. So, if I'm not wrong in this, I would suggest that you study its basics to get better acquainted with it, so that you can see what AI does exactly, how it works, what its possibilities are, etc. — Alkis Piskas
There are AIs that have been trained to sleep as well, and it helps them perform better. :smile: — chiknsld
This ambition to make a machine with subjective thoughts suffers from the fatal flaw that it assumes that its creator has an unmediated idea of subjective thought. It all seems to boil down to the need to reproduce something exactly like oneself: it is sexual, but also the need to produce something that will destroy: be violent. If you really want to make them like us, just have them screw and kill each other. — kudos
What we take today as an algorithm in programming will one day be a synthetic egoic witness to and in a problem solving matrix.
To sleep, perchance to dream. Do Androids Dream of Electric Sheep? I find the notion fascinating. Of course, dreaming as we know it is bound up with our neuroses, the conflicts generated by inner squabbles having to do with inadequacies and conflict vis-à-vis the world and others. I think thinkers like George Herbert Mead et al. have it right, in part: the self is a social construct, based on modelled behavior witnessed and assimilated and congealed into a personality. Along with the conditions of our hardwiring. — Constance
It is a conception of what it would be to have a truly synthetic human mind. It would have to be a kind of compu-dasein, and not merely programming. — Constance

I couldn't find what "compu-dasein" is. So I guess it's a term of yours, a combination of computer/computing and "dasein", the German term --esp. Heidegger's-- for existence. But what would be the nature of such a "synthetic" mind? What would it be composed of? Would it be something created? And if so, how?
an examination of a human "world" of possibilities structured in time — Constance

I know little about Heidegger's philosophy, from my years in college, in the far past, when I was getting acquainted with --I cannot use the word studying-- a ton of philosophers and philosophical systems. So I cannot conceive the above description of yours. It's too abstract for me. Indeed, this was the general feeling I had reading your messages since the beginning.
I couldn't find what "compu-dasein" is. So I guess it's a term of yours, a combination of computer/computing and "dasein", the German term --esp. Heidegger's-- for existence. But what would be the nature of such a "synthetic" mind? What would it be composed of? Would it be something created? And if so, how?
And so on. If one does not have all this or most of this information, how can one create a reality or even a workable concept about it? — Alkis Piskas
I know little about Heidegger's philosophy, from my years in college, in the far past, when I was getting acquainted with --I cannot use the word studying-- a ton of philosophers and philosophical systems. So I cannot conceive the above description of yours. It's too abstract for me. Indeed, this was the general feeling I had reading your messages since the beginning.
So, I'm sorry if I have misinterpreted your ideas, and for not being able to follow this long thread. — Alkis Piskas