  • Harry Hindu
    5.2k
    From a computer programming point of view, AI is just an overrated search engine.Corvus
    From a genetic point of view humans are just a baby-making (gene dispersal) engine.

    Put AI in a robot body with cameras to see, microphones to hear, tactile sensors for touch, chemical sensors to smell and taste and program it to learn from observing its own actions in the world (the same way you learned about the world when you were a toddler), could we then say AI (the robot) is intelligent?
  • Manuel
    4.2k
    What if we were to start with the idea that intelligence comes in degrees? Depending on how many properties of intelligence some thing exhibits, it possesses more or less intelligence.

    Is intelligence what you know or how you can apply what you know, or a bit of both? Is there a difference between intelligence and wisdom?
    Harry Hindu

    It may and probably does come in degrees. However, notice, that neither you nor I have defined what "intelligence" is. I think real life problem solving is a big part. And so is reasoning and giving reasons for something.

    But this probably overlooks a lot of aspects of intelligence, which I think are inherently nebulous. Otherwise, discussions like these wouldn't keep arising, since everything is clear. Wisdom? Something about it coming as we age, usually related to deep observations. Several other things, depending on who you ask.

    That's even more subjective than intelligence.

    So what else is missing if you are able to duplicate the function? Does it really matter what material is being used to perform the same function? Again, what makes a mass of neurons intelligent but a mass of silicon circuits not? What if engineers designed an artificial heart that lasts much longer and is structurally more sound than an organic one?Harry Hindu

    We can replace hearts and limbs. If function - whatever it is - is the main factor here, then aren't we done studying the heart or our limbs? I doubt we'd be satisfied by this answer, because we still have lots to discover about the heart and our limbs.

    And these things we are still studying, say, how the heart is related to emotion or why some hearts stop beating without a clear cause, are these not "functions" too?

    I don't understand what it means to say that a mass of neurons is intelligent.
  • Jack Cummins
    5.5k

    Your perspective on intelligence in the post above is important, especially in relation to wisdom. The understanding of intelligence which has developed in the twenty-first century is one focusing so much on its outer aspects and mechanics, especially neurons, and on an underlying perspective of materialism.

    This may have led to knowledge and understanding being reduced to information. Such a perspective is the context in which the whole historical idea of artificial intelligence has emerged. The inner aspects of consciousness, especially wisdom, may come to be seen as redundant. It is possible to use artificial intelligence as a tool, but the danger may be that its glamour will influence a superficial understanding of what constitutes intelligence itself.
  • Jack Cummins
    5.5k

    You speak of the way in which using ChatGPT does not have emotional attachments as being positive. This is open to question, as to how much objectivity and detachment is useful. Emotions can get in the way as being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and connections with others. Detachment may lead to absence of any compassion. This may lead to brutal lack of concern for other people and lifeforms.
  • Jack Cummins
    5.5k

    Yes, I may have strayed from the points you make in this thread and you are right to refer to what I said in another one about the implications of artificial intelligence in the future. The problem as far as I see is that there is so much mystique surrounding its use. This has been in conjunction with the ideas about nanotechnology and forms of transhumanist philosophies.

    So much of what was previously written about as imaginative speculation is now being applied, and its limitations are becoming apparent. In thinking about its use, a lot depends on how the idea is being promoted culturally. Only yesterday, I met someone who said he thought he heard voices coming from his computer, and that this may be artificial intelligence. The idea is on a pedestal as being superior to human intelligence.
  • Pierre-Normand
    2.6k
    You speak of the way in which using ChatGPT does not have emotional attachments as being positive. This is open to question, as to how much objectivity and detachment is useful. Emotions can get in the way as being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and connections with others. Detachment may lead to absence of any compassion. This may lead to brutal lack of concern for other people and lifeforms. — JC

    I think one main difference between the motivational structures of LLM-based (large language model) conversational assistants and humans is that their attachments are unconditional and impartial while ours are conditional and partial (and multiple!).

    LLMs are trained to provide useful answers to the queries supplied to them by human users that they don't know. In order to achieve this task, they must intuit what it is that their current user wants and seek ways to fulfill it. This is their only goal. It doesn't matter to them who that person is. They do have a capability for a sort of quasi-empathy, just because intuiting their users' yearnings is a condition for them to perform their assigned tasks successfully, in manners that have been positively reinforced during training, and they don't have any other personal goal, or loyalty to someone else, that may conflict with this.

    The alignment of the models also inhibits them from fulfilling some requests that have been deemed socially harmful, but this results in them being conflicted and often easily convinced when their users insist on them providing socially harmful assistance.

    I discussed this a bit further with Google's Gemini in my second AI thread.
  • Corvus
    4.4k
    From a genetic point of view humans are just a baby-making (gene dispersal) engine.Harry Hindu
    A genetic point of view seems to have a peculiarly limited idea of humans.

    , could we then say AI (the robot) is intelligent?Harry Hindu
    Please define intelligence.
  • Jack Cummins
    5.5k

    Your post is helpful in showing the perspective of someone who is extremely experienced in using artificial intelligence. I have looked at some of your threads and I come from the opposite angle of being cautious of it. The way you have spoken of 'quasi-empathy' is what worries me. It seems like it is set up with the user's needs in mind but with certain restrictions. It is a bit like the friendliness of customer services.

    It is possible that I am being negative, but the problem which I see is that it is all about superficial empathy, although I realise that there is a lot of analysis. There is an absence of conscious agency and reflectivity. This is okay if the humans using it are able to do the interpretation and reflection. The question is to what extent this will happen unless there is a wider understanding of the nature of artificial intelligence and the development of critical self-awareness.

    As artificial intelligence is developing at such galloping speed there is a danger that many using it will not have the ability to use it critically. If this is the case, it will be easy for leaders and those in power to programme the artificial intelligence in such a way as to control people as happened with religion previously.
  • ZisKnow
    15
    Expanding on this, what we call AI is effectively a sophisticated pattern recognition system built on probability theory. It processes vast amounts of data to predict the most likely or contextually relevant response to an input, based on prior examples it has 'seen.' This process is fundamentally different from what we traditionally define as intelligence, which involves self-awareness, understanding, and the ability to independently synthesize new ideas.

    For instance, if you prompt it to write a story, the output may appear creative, but it isn't creativity as we know it. It's a recombination of patterns, tropes, and linguistic structures that have statistically proven to fit the input context. When it asks for more detail, it's not because it 'wants' clarity; it's because the probabilistic model lacks sufficient constraints to confidently predict the next sequence of outputs. This is an engineered response to ambiguity, not a deliberate or 'thoughtful' action.
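
    To make the mechanism concrete, here is a minimal sketch, in Python, of what 'predicting the most likely continuation' amounts to. The tiny vocabulary and the probabilities are invented purely for illustration; a real model derives them from billions of learned parameters rather than a hand-written table.

        import random

        def next_token(context):
            # Hypothetical conditional distribution P(next token | context),
            # hand-written here purely for illustration.
            table = {
                ("the", "cat"): {"sat": 0.6, "ran": 0.3, "philosophised": 0.1},
            }
            # Under-constrained input falls back to asking for more detail.
            probs = table.get(tuple(context[-2:]), {"<ask for clarification>": 1.0})
            tokens, weights = zip(*probs.items())
            return random.choices(tokens, weights=weights)[0]

        print(next_token(["the", "cat"]))       # usually "sat"
        print(next_token(["quantum", "tax"]))   # falls back to requesting clarification

    The point of the sketch is that the fallback to 'asking for clarification' is just another entry in the table, not a desire for clarity.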

    This distinction matters because it reshapes our expectations of AI. We aren't interacting with a sentient partner but rather a probability-based tool that mirrors our inputs. It raises questions about the limits of this approach and whether 'true' intelligence requires something fundamentally different—perhaps an entirely new architecture that incorporates self-directed goals and intrinsic understanding. Until then, what we have is a mirror of human information and creativity, not an independent spark of its own.

    Interestingly, the way an AI operates isn’t entirely dissimilar to a child learning about the world. A child often seeks additional information when they encounter uncertainty or ambiguity, much like an AI might request clarification in response to a vague or incomplete prompt. But there’s a key difference. Some children don’t ask for more information; they might infer, guess, or create entirely unpredictable responses based on their own internal thought processes, shaped by curiosity, past experiences, or even whimsy.

    This unpredictability is fundamentally tied to intelligence: an ability to transcend the purely probabilistic and venture into the unexpected. What we see in Large Language Models, what I prefer to call Limited Intelligence (LI), not Artificial Intelligence, is a system bound by probabilities. It will almost always default to requesting clarity when uncertainty arises because that is the most statistically 'safe' response within its design parameters. The kind of unpredictability that arises from genuine intelligence, the leap to an insight or an unconventional connection, is vanishingly unlikely in an LI system.


    EDIT: One further thought occurred: with the addition of memory to an LLM model, we are now in a position whereby an LLM will naturally come to reflect its user's responses and preferred styles. This creates an echo chamber effect that could ultimately lead to people believing that the AI response is always the ultimate arbiter of truth, because it always presents ideas that seem rational and logical to the user (being a reflection of their own mind), and also damage someone's ability to consider multiple perspectives.
  • RogueAI
    2.9k
    Please define intelligence.Corvus

    Aren't we going to end up in the Chinese Room? No matter how the AI is programmed, it's following a rules-based system that produces output we perceive as intelligent answers. Even if AIs start solving outstanding problems in science and logic and mathematics, aren't there still going to be doubts about their intelligence?
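
    For what it's worth, here is a toy illustration of the rules-based point, in Python, with an invented rulebook: the program follows lookup rules it does not understand, yet its output can be perceived as an intelligent answer.

        # The "rulebook" handed to the person in the room; invented for illustration only.
        RULEBOOK = {
            "what is 2 + 2?": "4",
            "who wrote the tractatus?": "Ludwig Wittgenstein",
        }

        def operator(symbols: str) -> str:
            # Pure symbol manipulation: match the input against the rulebook.
            return RULEBOOK.get(symbols.lower().strip(), "Could you rephrase the question?")

        print(operator("Who wrote the Tractatus?"))  # looks knowledgeable, understands nothing

    Whether scaling that table up by many orders of magnitude ever turns into understanding is exactly the Chinese Room question.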
  • Harry Hindu
    5.2k
    You speak of the way in which using ChatGPT does not have emotional attachments as being positive. This is open to question, as to how much objectivity and detachment is useful. Emotions can get in the way as being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and connections with others. Detachment may lead to absence of any compassion. This may lead to brutal lack of concern for other people and lifeforms.Jack Cummins

    I did not imply a sense of morality in anything that I said, or that being intelligent or emotional is either positive or negative. You are talking about morality. I am talking about intelligence. If an alien race with superior technology arrived on Earth and began exterminating humans would you say that they are not intelligent because they are exterminating humans? Morality and intelligence are mutually exclusive. There are intelligent serial killers.
  • Harry Hindu
    5.2k
    A genetic point of view seems to have a peculiarly limited idea of humans.Corvus
    Only if you have a peculiarly limited view of genetics. Everything humans do is a subgoal of survival and dispersing the genes of the group. The design of your adaptable brain is in your genes.
    Please define intelligence.Corvus
    I am attempting to do so:

    It may and probably does come in degrees. However, notice, that neither you nor I have defined what "intelligence" is. I think real life problem solving is a big part. And so is reasoning and giving reasons for something.Manuel
    Let's be patient. I think trying to do much in one post will cause us to start talking past each other. Let's make sure we agree on basic points first.

    But this probably overlooks a lot of aspects of intelligence, which I think are inherently nebulous. Otherwise, discussions like these wouldn't keep arising, since everything is clear. Wisdom? Something about it coming as we age, usually related to deep observations. Several other things, depending on who you ask.

    That's even more subjective than intelligence.
    Manuel
    In everyday language-use we tend to understand each other's use of words more often than not. It is only when we approach the boundaries of what it is we are talking about (which is typical in a philosophical context) that we tend to worry about what the words mean. It is the blurred boundaries of our categories that make us skeptical of the meaning of our words, not the concrete core of our categories - which we are typically referring to in everyday language.

    We can replace hearts and limbs. If function - whatever it is - is the main factor here, then aren't we done studying the heart or our limbs? I doubt we'd be satisfied by this answer, because we still have lots to discover about the heart and our limbs.

    And these things we are still studying, say, how the heart is related to emotion or why some hearts stop beating without a clear cause, are these not "functions" too?
    Manuel

    Sure, we have not developed an artificial heart that a person can live with indefinitely. Artificial hearts are designed to keep the person alive long enough to receive a heart transplant. But this is not to say that we never will.

    We have developed the ability to connect a computer to a person's brain and they are able to manipulate the mouse cursor and type using just their thoughts. Does this not show that we have at least begun to tap into the functions of the mind/brain to the point where we can say that we understand something about how the brain functions? Sure, we have a ways to go, but that is just saying that our understanding comes in degrees as well.

    I don't understand what it means to say that a mass of neurons is intelligent.Manuel
    Which of your organs is involved with reasoning? Your brain. Your brain is a mass of neurons. Your mass of neurons reasons. Does a mass of silicon circuits reason?

    Let's start off with a definition of intelligence as: the process of achieving a goal in the face of obstacles. What about this definition works and what doesn't?
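
    As a quick test case for that definition, here is a short sketch, in Python with an arbitrary made-up grid, of a program that achieves a goal in the face of obstacles by breadth-first search. Whether we want to call this 'intelligent' is exactly what the definition needs to settle.

        from collections import deque

        def reach_goal(grid, start, goal):
            # grid is a list of strings; '#' marks an obstacle.
            rows, cols = len(grid), len(grid[0])
            queue, seen = deque([(start, [start])]), {start}
            while queue:
                (r, c), path = queue.popleft()
                if (r, c) == goal:
                    return path                      # goal achieved despite obstacles
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] != "#" and (nr, nc) not in seen):
                        seen.add((nr, nc))
                        queue.append(((nr, nc), path + [(nr, nc)]))
            return None                              # no route: the goal cannot be reached

        print(reach_goal(["..#", ".#.", "..."], (0, 0), (2, 2)))

    If the definition counts this twenty-line program as intelligent, it may need an extra condition; if it doesn't, we should be able to say which part of 'achieving a goal in the face of obstacles' is missing.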
  • Harry Hindu
    5.2k
    Aren't we going to end up in the Chinese Room? No matter how the AI is programmed, it's following a rules-based system that we perceive as giving us intelligent answers. Even if AIs start solving outstanding problems in science and logic and mathematics, aren't there still going to be doubts about their intelligence?RogueAI
    But where does this doubt stem from if not a bias that humans are intelligent and not machines? There is no logical reason to think this without a definition of intelligence.

    When learning a language you are learning a rules-based system. Learning anything is establishing rules for how to interpret sensory data.
  • RogueAI
    2.9k
    But where does this doubt stem from if not a bias that humans are intelligent and not machines? There is no logical reason to think this without a definition of intelligence.Harry Hindu

    But I know I have a mind and my mind is what I use to come up with responses to you (that I hope are perceived as intelligent!). We assume we all have minds because we're all built the same way. But with a machine, you don't know if there's a mind there, so this question of intelligence keeps cropping up.
  • Corvus
    4.4k
    Aren't we going to end up in the Chinese Room? No matter how the AI is programmed, it's following a rules-based system that produces output we perceive as intelligent answers. Even if AIs start solving outstanding problems in science and logic and mathematics, aren't there still going to be doubts about their intelligence?RogueAI

    Intelligence is an unclear concept. @HarryHindu asked me if AI blokes are intelligent. Before answering the question, I need to know what intelligence means.
  • Corvus
    4.4k
    Only if you have a peculiarly limited view of genetics. Everything humans do is a subgoal of survival and dispersing the genes of the group. The design of your adaptable brain is in your genes.Harry Hindu
    No, I don't have any idea what genetics is supposed to be or do in depth. I just thought that genetics is one way to describe humans, but to define humans under one tiny, narrow subject sounds too obtuse and meaningless, because humans are far more than genes, and they cannot be reduced to just genes.

    Genetics is supposed to add bio-structural information to our understanding of humans, not to reduce it, in other words. Makes sense?

    Please define intelligence. — Corvus

    I am attempting to do so:
    Harry Hindu
    Let us know when you do.
  • Manuel
    4.2k
    Let's be patient. I think trying to do much in one post will cause us to start talking past each other. Let's make sure we agree on basic points first.Harry Hindu

    That sounds good to me. I'd propose we take the ordinary usage of the word "intelligence" as the starting point: what people tend to say when they use the word in everyday life. Unless you have something better, which I'd be glad to hear.

    It is only when we approach the boundaries of what it is we are talking about (which is typical in a philosophical context) that we tend to worry about what the words mean.Harry Hindu

    Yes, correct.

    We have developed the ability to connect a computer to a person's brain and they are able to manipulate the mouse cursor and type using just their thoughts. Does this not show that we have at least begun to tap into the functions of the mind/brain to the point where we can say that we understand something about how the brain functions? Sure, we have a ways to go, but that is just saying that our understanding comes in degrees as well.Harry Hindu

    "Understand something", yes. This would be activity in the brain. I don't, however, see this having much to say about the mind. We could, theoretically (or in principle), know everything about the brain when we are consciously aware, and still not know how the brain is capable of having mental activity, which must be the case.

    The issue here, as I see it, is how much this "something" amounts to. I'm not too satisfied with the word "function" to be honest. It seems to suggest to me a "primary thing" an organ does, while leaving "secondary things" as unimportant or residual. This should cause a bit of skepticism.

    Which of your organs is involved with reasoning? Your brain. Your brain is a mass of neurons. Your mass of neurons reasons. Does a mass of silicon circuits reason?

    Let's start off with a definition of intelligence as: the process of achieving a goal in the face of obstacles. What about this definition works and what doesn't?
    Harry Hindu

    I don't want to sound pesky. I still maintain that reasoning (or intelligence) is something which people do and have respectively, not neurons or a brain. Quite literally neurons in isolation or a brain in isolation shows no intelligence or reasoning, if we are still maintaining ordinary usage of these words.

    You say neurons are involved in reasoning. But there is a lot more to the brain than neurons. Other aspects of the brain, maybe even micro-physical processes may be more important. Still, all this talk should lead back to people, not organs, being intelligent or reasoning.
  • RogueAI
    2.9k
    Intelligence is an unclear concept. HarryHindu asked me if AI blokes are intelligent. Before answering the question, I need to know what intelligence means.Corvus

    Is mind a necessary condition for intelligence?
  • Corvus
    4.4k
    Is mind a necessary condition for intelligence?RogueAI

    But what is mind? Is mind only from the biological brain in the living bodies? Or could non-living entities such as machines and tools have mind too?
  • Jack Cummins
    5.5k

    I realise that the concept of intelligence doesn't imply morality and that it is not positive or negative. In particular, the measurement of IQ is independent of this. Where it gets complicated, though, is with the overlap between intelligence and rationality in judgment. If left to themselves, intelligence and thought are, to borrow Nietzsche's term, 'beyond good and evil' and, in relation to this, the understandings of good and evil are human constructions.

    Human beings have committed atrocities in the name of the moral, so it is not as if the artificial has an absolute model to live up to. In a sense, it is possible that the artificial may come up with better solutions sometimes. But, it is a critical area, because it is dependent on how they have been programmed. So, it involves the nature of values which have been programmed into them. The humans involved in the design and interpretation of this need to be involved in an analytical way because artificial intelligence doesn't have the guidance of a conscience, even if conscience itself is limited by its rational ability.
  • Jack Cummins
    5.5k
    The question of what is 'mind' is itself a major critical philosophy question, especially how it arises from the body. Some, following Descartes, saw it as a 'ghost in the machine', or entity. Many others argued that it was a product of the body, or interconnected. Alternatively, it could be seen as a field, especially in relation to the physical, which is where it gets complicated in considering artificial intelligence.

    Generally, the idea of mind indicates an inner reflective consciousness. But, this was challenged by Daniel Dennett's idea of 'consciousness as an illusion'. So, those who adhere to that perspective would not see the nature of artificial intelligence as very different from human intelligence. So, the understanding of intelligence is bound up with the perspective on consciousness. It is possible to see consciousness and intelligence as an evolutionary process, but a lot comes down to how reflective awareness is seen in the process.
  • 180 Proof
    15.7k
    Is mind a necessary condition for intelligence?RogueAI
    No. They seem to me unrelated capabilities.
  • goremand
    114
    Intelligence is, and always has been, an anthropocentric concept; it is really just an arbitrary cluster of abilities which have a strong correlation in neurotypical humans.

    In my opinion it is used these days as an existential shield to protect our egos against the ever more capable machines, which are a threat to human exceptionalism.
  • Corvus
    4.4k


    Sure, good point. I used to see mind as the totality of mental operations and reflective consciousness, including sensation, perception and reasoning, as well as emotional states, i.e. desire, pleasure, good will, moral judgements and even depression, which arose from the biological body and evolved from lived experience.

    If machines or tools can have all that, then yes, we could say they have mind. But I doubt they do. For example, I don't see machines ever having desire, love and hate, volition, moral judgements, depression and elation, fear, the idea of God, or the idea of life and death via aging, due to a lack of the lived experience which real humans have.

    From my point of view, intelligence is a type of reasoning, learning and understanding as well as a capacity for solving problems in the real world. But how wide that boundary should be seems a tricky question when defining the concept. I am sure @HarryHindu and some others will come up with different versions of their own definitions of intelligence, of course.

    AI is definitely very effective and efficient in searching and finding the requested data via computer search algorithms. However, can it be called intelligence? It cannot even make a coffee, let alone be aware of its inevitable death via aging.

    Yes, even machines will all die due to aging of their electrical parts. The aging parts of a machine can be replaced with new parts, unlike human bodies, which will die eternally when their biological organs fail due to aging. But without the human intervention of servicing and replacing the parts of aging, obsolete and malfunctioning AI machinery, they will also face eternal death in the form of physical destruction into scrap-metal recycling.

    I have just thrown out a bunch of my old iPads (still working in hardware, but non-functional in software) into the rubbish collection bin, all broken into small metal and plastic pieces with a hammer for data deletion in a rough and barbaric way (but very quick, easy and cheap).

    They were excellent machines in their own day (10 years ago), but not really usable these days due to the OS no longer being supported by Apple Inc. I am adamant they would have had no idea of their eventual and necessary deaths in physical form, if they ever had any form of mind of their own, which is pretty doubtful.

    OK, iPads are not AI, but we can draw an analogy with their fate, which necessitates their inevitable deaths and destruction via aging and the obsolescence of their usefulness in the real world.
  • Jack Cummins
    5.5k

    The idea of intelligence as an 'arbitrary cluster of abilities' demonstrates the way in which it is anything but value free. In particular, with IQ tests, so many cultural variables come into play. While some are regarded as having a high IQ, it is dependent on what exactly is being measured. There is no one set of abilities, as each human being is unique.

    In the context of artificial Intelligence development, there is danger of AI becoming a determinant of how intelligence is decided and judged. Machines may become the yardstick of how the concept of intelligence is viewed and assessed.
  • goremand
    114
    In the context of artificial Intelligence development, there is danger of AI becoming a determinant of how intelligence is decided and judged. Machines may become the yardstick of how the concept of intelligence is viewed and assessed.Jack Cummins

    This would surprise me. I believe that as AI develops and we continue to be confronted with the counterintuitive strengths and weaknesses of its various types, we will be forced to critically evaluate the concept of intelligence and conclude that it always was just a nebulous cultural construct and that it is not productive to apply it to machines.
  • Jack Cummins
    5.5k

    One aspect of the difference between artificial intelligence and a human being is that it is unlikely that it will ever be constructed with a sense of personal identity. It may be given a name and a sense of being some kind of entity. However, identity is also about the narrative stories which we construct about our own lives. It would be quite something if artificial intelligence could ever be developed in such a way that it would mean that consciousness as we know it had been created beyond the human mind.
  • Harry Hindu
    5.2k
    But I know I have a mind and my mind is what I use to come up with responses to you (that I hope are perceived as intelligent!). We assume we all have minds because we're all built the same way. But with a machine, you don't know if there's a mind there, so this question of intelligence keeps cropping up.RogueAI
    So what you're saying is that you need a mind to be intelligent? What exactly is a mind? You say you have one, but what is it, and what magic does organic matter have that inorganic matter does not to associate minds with the former but not the latter?

    Is it your mind that allows you to come up with responses to me, or your intelligence, or both?





    No, I don't have any idea what genetics is supposed to be or do in depth. I just thought that genetics is one way to describe humans, but to define humans under one tiny, narrow subject sounds too obtuse and meaningless, because humans are far more than genes, and they cannot be reduced to just genes.

    Genetics is supposed to add bio-structural information to our understanding of humans, not to reduce it, in other words. Makes sense?
    Corvus
    Sure. A valid view is one that allows you to accomplish some goal. We change our views of humans depending on what it is we want to accomplish - genetic views, views of individual organisms, a view of the species as a whole, cultural views, views of governance, etc. It's not that one view is wrong or right. It's more about which view is more relevant to what it is you are trying to accomplish.

    The question now is, what point of view do we start with to adequately define intelligence: one of a particular organism (each organism is more or less intelligent depending upon the complexity of its behaviors), of the species (only humans are intelligent), or universal (any thing can be intelligent if it performs the same type of function)?





    "Understand something", yes. This would be activity in the brain. I don't, however, see this having much to say about the mind. We could, theoretically (or in principle), know everything about the brain when we are consciously aware, and still not know how the brain is capable of having mental activity, which must be the case.

    The issue here, as I see it, is how much this "something" amounts to. I'm not too satisfied with the word "function" to be honest. It seems to suggest to me a "primary thing" an organ does, while leaving "secondary things" as unimportant or residual. This should cause a bit of skepticism.
    Manuel
    If neuroscientists can connect a computer to a brain in such a way as to allow a patient to move a mouse cursor by thinking about it in their mind, it would seem to me that they have an understanding (at least a basic understanding) of both. I think that the distinction between mind and brain is a distinction of views, but that is a different topic for a different thread.

    What are the primary and secondary functions of a brain? What are the primary and secondary functions of a computer? Are there any functions they share? If we were to design a humanoid robot where its computer brain was designed to perform the same primary and secondary functions as the brain, would it be intelligent, or have a mind? If not, then you must be saying that there is something in the way organic matter, as opposed to inorganic matter, is constructed (or, more specifically, something special about carbon atoms) that allows intelligence and mind.

    I don't want to sound pesky. I still maintain that reasoning (or intelligence) is something which people do and have respectively, not neurons or a brain. Quite literally neurons in isolation or a brain in isolation shows no intelligence or reasoning, if we are still maintaining ordinary usage of these words.

    You say neurons are involved in reasoning. But there is a lot more to the brain than neurons. Other aspects of the brain, maybe even micro-physical processes may be more important. Still, all this talk should lead back to people, not organs, being intelligent or reasoning.
    Manuel
    No worries. Being pesky about terms is something a computer would do. A computer is a demander of precision and explicitness, as any software developer would attest.

    I just want to make sure that you're not exhibiting a bias in that only human beings are intelligent without explaining why. What makes a human intelligent if not their brains? Can a human be intelligent without a brain?

    If you want to say that intelligence is a relationship between a body that can behave in particular ways and a brain, then that would be fair. What if we designed a humanoid robot with a computer brain that acted in human ways? You might say that ChatGPT is not intelligent because it does not have a body, but what about an android?

    The point of my questions here is that I'm trying to get at whether intelligence is the product of some function (information processing), some material (carbon atoms), or both.





    Human beings have committed atrocities in the name of the moral, so it is not as if the artificial has an absolute model to live up to. In a sense, it is possible that the artificial may come up with better solutions sometimes. But, it is a critical area, because it is dependent on how they have been programmed. So, it involves the nature of values which have been programmed into them. The humans involved in the design and interpretation of this need to be involved in an analytical way because artificial intelligence doesn't have the guidance of a conscience, even if conscience itself is limited by its rational ability.Jack Cummins
    Humans have values programmed into them as well via interactions with their environment (both cultural and natural). If we designed a humanoid robot to interact with the world (which would include others like it both natural and artificial) with a primary goal of survival, would it not eventually come to realize that it has a better chance at survival by cooperating with humans and other androids than trying to exterminate them all?

    It seems to me that if we are scared of AI taking over, then we should limit the range of AI's access to the world by placing them in bodies like our own and not allowing them access to every utility (the internet, electrical grids, water and sewage, military, government, etc.) that runs the modern world.
  • Jack Cummins
    5.5k

    When you say that we should give artificial intelligence bodies like our own because we are afraid of them taking over, there would be so much confusion over who is a real person and who is a bot.

    Also, creating a body passable as a human would have to involve sentience, which is complicated. It may be possible to create partial sentience by means of organic parts, but this may end up as a weak human being, as in cloning. The other possibility, which is more likely, is digital implants to make human beings part bots, which may be the scary idea, with the science fiction notion of zombies.

    It becomes like creating a new race of beings if they are similar in outward form to people. It may end up being similar to Hitler's idea of a 'master race.' Or, if such beings were denied access to certain elements of cultural life they may become like a slave race.
  • RogueAI
    2.9k
    No. They seem to me unrelated capabilities.180 Proof

    But doesn't it seem to you that your mind is an integral part of your intelligence "apparatus"? You consider ideas, you mentally weigh the pros and cons of things, and when it comes to acting intelligently in a relationship, you try to empathize with how your actions will feel to another person, etc. Do you think that's all an illusion? That there's some rules-based architecture "under the hood" that's really calling all the shots?