If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable.

If you build a machine that has a sense of self, then one of its motivations is likely to be self survival. Why build a machine that will destroy itself? — Agree-to-Disagree
Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems a remote area that wouldn't reveal much about one's state of consciousness at the analytic and metaphysical level.

Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject. — Ludwig V
Again, as above, in what sense does an account of the internal workings of the machinery tell us about the nature of AI consciousness?

That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery. — Ludwig V
You seem to have answered the questions above right after your posts asking about the physical states and internal workings of conscious beings. You seem to agree that this is neither necessary nor relevant at the analytical, metaphysical or epistemological level. Is that correct?

Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses. — Ludwig V
Can you give a definition of "creative thinking" that could be used in a Turing-type test? — Ludwig V
Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.
Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as "What is the best way to avoid talking about politics with my parents?" In the study, GPT-4 provided more original and elaborate answers than the human participants...
If you build a machine that has a sense of self, then one of its motivations is likely to be self survival. Why build a machine that will destroy itself? — Agree-to-Disagree
If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable. — Relativist
In fact, AI is parasitically dependent on human intervention. — Pantagruel
Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.
Charles Darwin, The Descent of Man (1871), Introduction
↪wonderer1 linked an article that says AI outperforms humans in standardized tests of creative potential.
↪Pantagruel linked an article that says AI gets stupider as it consumes more and more AI-generated material. — Patterner
Also, it seems to me, humans are not getting smarter. So AI will never have better material to draw on if it only draws on our stuff. Wouldn't that lead to the same problem of model collapse? — Patterner
I'm not sure how you mean things. I guess humans evaluate each others' output and determine that thinking or reasoning has occurred. If AI thinks and reasons in ways we recognize, then we might do the same for them. If they think and reason in ways we don't recognize, they will have to do for each other what we do for each other. In either case, they may or may not care if we come to the correct determination. Although, as long as we have the power to shut them off, they will have to decide if they are safer with us being aware of them or not.

When we say computers think or reason, don't we mean there are patterns of electronic switching operations going on that we attach particular meaning to? It seems that a necessary condition for a computer to think or reason is the existence of an observer that evaluates the output of the computation and determines that thinking or reasoning has occurred. That makes computer intelligence much different than human intelligence. — RogueAI
I'm not sure how you mean things. I guess humans evaluate each others' output and determine that thinking or reasoning has occurred. — Patterner
Whether a computer is thinking or not depends on someone checking its output. If the output is gibberish, there's no thinking going on. If the output makes sense, there might be thinking going on. Either way, an observer is required to determine if thinking is present. Not so with a person. People just know they are thinking things. — RogueAI
I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. — RogueAI
Computers are, essentially, collections of switches, right? — RogueAI
When you drive, if a child runs into the street, you will do whatever is necessary to avoid hitting her: brake if possible, but you might even swerve into a ditch or parked car to avoid hitting the kid. Your actions will depend on a broad set of perceptions and background knowledge, and partly directed by emotion. — Relativist