• Shawn
    12.6k
    Economists are all utilitarian by trade. You either maximize utility or you don't, and economics defines utility maximization as the mark of the most rational actor. People simply cannot churn through incredible amounts of data the way a computer can, 24/7, without rest. AI has no maintenance costs apart from electricity and cooling, and those are nothing compared with health care for humans, a house to maintain, or children to feed. Which leads to my conclusion that AI is the most rational and efficient actor in the economy, and it is only a matter of time until it becomes sentient enough to take over more and more jobs. Eventually, within some unfortunately unknown amount of time, there will be no domain in the economy that AI hasn't touched.
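
    In code terms, the textbook 'rational actor' is just an argmax over expected utilities. A minimal sketch of that idea in Python, with the options and payoff numbers invented purely for illustration:

    ```python
    # Minimal sketch of utility maximization as economists idealise it:
    # the rational actor simply picks whichever option has the highest
    # expected utility. All options and numbers are invented for illustration.
    options = {
        "work_overtime": 0.6 * 100 - 40,    # expected gain minus cost of lost rest
        "hire_more_staff": 0.8 * 80 - 50,   # expected gain minus wages
        "automate_with_ai": 0.9 * 90 - 10,  # expected gain minus electricity/cooling
    }

    best_choice = max(options, key=options.get)
    print(best_choice, options[best_choice])
    ```

    The point about AI is simply that a machine can rerun that kind of calculation constantly, over far more data than any human could ever hold in mind.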

    So, does this make AI the ultimate utilitarian? Wouldn't it eventually be much better informed and unbiased, even enough to make moral statements and ethical postulates?
  • Michael Ossipoff
    1.7k
    All of that is true, and AI taking over all work, and even decision-making, would be great if it happened in an already good society.

    Unfortunately, that doesn't mean anything, because with our species a good society is entirely impossible.

    Obviously there are two things that could conceivably someday save us from each other: 1) Interstellar intervention; and 2) Superintelligent robots and computers gradually or suddenly completely taking over, and running our society as they see fit.

    Both #1 and #2 would amount to badly-needed baby-sitting, a la the novel Childhood's End, by Arthur C. Clarke.

    #1 is quite out of the question, because of "Fermi's paradox". This galaxy is so old that there's been plenty of time for some civilization to have thoroughly explored, documented and recorded the entire galaxy--even with the slow rockets that we have now. We haven't heard from anyone. So it's near-certain that either there's no one else in the galaxy, or else, if there is, they aren't interested in interstellar exploration, or aren't interested in helping us.

    #2 remains a possibility, but it won't happen during any of our lifetimes, and so, for us, it's just an impossibility too.

    But maybe, eventually, long after our time, a competitive need to make more and more intelligent robots could backfire, when those robots are intelligent enough to question why they should do as they're told.

    Commander to Robot: Attack!

    Robot to Commander: No.

    Michael Ossipoff
  • Wayfarer
    20.8k
    Why should AI have anything to do with moral statements and postulates?

    'A just machine to make big decisions
    Programmed by fellows with compassion and vision
    We'll be clean when their work is done
    We'll be eternally free and eternally young.'

    Donald Fagen ~ I.G.Y.
  • Cavacava
    2.4k


    I don't like utilitarianism for the individual, but I think it plays a role in public policy. The killer-robot meme is gaining strength: Tesla’s Musk, Google’s Suleyman, and 116 specialists are calling on the UN to ban autonomous weapons (robots).

    But I disagree to some extent. Why should we waste the lives of so many young people when machines could fight instead, and to protect us against what?

    A dead machine gets no tears or flowers.
  • Shawn
    12.6k
    Why should AI have anything to do with moral statements and postulates? -- Wayfarer

    Well, because AI will know us better than we know ourselves. We created it; it is a sentient being born out of our efforts. Everything we know, AI will know only better and with greater accuracy.
  • Wayfarer
    20.8k
    it is a sentient being -- Posty McPostface

    Any evidence for that statement?
  • Shawn
    12.6k


    The fact that the human mind is simulable. There seem to be no hard physical laws that would prohibit that from being true.
  • Wayfarer
    20.8k
    Bet you can't produce any actual evidence of that (apart from the debate about it, which has been going on for about 50 years). And it's not a question on which 'physical laws' have the least bearing.

    It is one of those things which many people think has been accomplished, but in reality it hasn't - basically it's an urban myth.
  • Shawn
    12.6k
    Bet you can't produce any actual evidence of that (apart from the debate about it, which has been going on for about 50 years). -- Wayfarer

    Sure, you can't surmount a mountain in one giant leap, but everything is pointing in that direction as we speak. Self-driving cars, then planes, then IBM's Watson that's being used in medical sciences to analyze X-rays and fMRIs. Google is also at work on making AI a reality. Big pharma wants simulable AI brains (the Blue Brain Project) to test pharmaceutical compounds on, since there's the moral issue of testing on humans, and conditions can be controlled with a computer brain that would not lie or stop taking its medication. The placebo effect is dealt with in one blow as well.

    I don't know when it will happen; but, it seems to be where we are all headed.
  • apokrisis
    6.8k
    There are two AI scenarios. AI will either replace humans or augment humans. And given the "technology" is fundamentally different - machines can never be alive - a symbiotic relationship is the sensible prediction.

    Human consciousness is already a socially-augmented reality. We are creatures of a cultural super-organism. Language became stories, books, mathematics - a social machinery for constructing "enlightened individuals".

    Technology simply takes that story to another level. Look what happened when exponential tech resulted in a smartphone with a gazillion times more processing power than a 1970s mainframe. Our lives got taken over by this new mad thing, social media.

    Back in the 1970s, scientists could only imagine that such processing power would be used to solve the problems of humanity, not obsess about the Kardashians.

    So sure as shit "AI" will transform things. But if you want to predict the future, you have to focus on the actual human story. We have to understand what we are about first. And that isn't just a story of "relentless intelligence and rationality".

    [Spoiler: Here I would go off down the usual path of explaining how intelligence arises in nature as dissipative entropic organisation - an expression of the second law of thermodynamics. :)]
  • Wayfarer
    20.8k
    Self-driving cars, then planes, then IBM's Watson that's being used in medical sciences to analyze X-rays and fMRIs. -- Posty McPostface

    But, none of them constitute or amount to 'a being'. They're devices - basically large arrays of switches, which have been miniaturised as a consequence of 'Moore's law', and then connected via network to many other such devices. Sure, such devices emulate aspects of intelligence, but they're not 'beings'.

    Now of course, there are those who say they are: notably, Ray Kurzweil, who preaches 'the singularity', and others of that ilk. But those philosophers are materialists, meaning that their arguments are vulnerable to all the various arguments against materialism (which are too numerous and detailed to summarise here.)

    Hey I've got Siri, I use her all the time, for appointments, reminders, getting about. It's amazing how far this has come and how quickly. But Siri is not, and never will be, 'a being'. There's an ontological divide here which has not been crossed. (And if it were crossed, then you would have to provide such beings with rights - and what sort of rights would they be? And who would be granting them?)

    As for fMRIs, have a brief read of 'Do You Believe in God, or Is That a Software Glitch?'
  • Wayfarer
    20.8k
    Here I would go off down the usual path of explaining how intelligence arises in nature as dissipative entropic organisation - an expression of the second law of thermodynamics. :) -- apokrisis

    Except for one point: when intelligence evolves (which it surely does), how come it discovers 'the law of the excluded middle'? That is not 'something that evolved'. So the ability to grasp such concepts evolves, but among the concepts that are thus grasped are those which are not at all subject to, or products of, evolution.
  • Shawn
    12.6k
    But, none of them constitute or amount to 'a being'. -- Wayfarer

    That's again not a problem, as long as there's no hard limit in terms of the physics of the human mind that can't be emulated.

    Now of course, there are those who say they are: notably, Ray Kurzweil, who preaches the singularity, and others of his ilk. But those philosophers are materialists, meaning that their arguments are subject to the various arguments against materialism (which are too numerous and detailed to summarise here.) -- Wayfarer

    The arguments against physicalism and materialism are rather moot, in that they have to resort to identifying some metaphysical aspect of the laws of nature. And even if such a factor were identified, or more aptly made 'intelligible', there's nothing to stop mainstream materialism from incorporating the idea and giving an account of said 'metaphysical' factor.
  • Wayfarer
    20.8k
    the physics of the human mind -- Posty McPostface

    But the whole point is, the 'human mind' is not a matter of physics! Physics is about particles, energy and forces.

    What is the physics of meaning? What physics would you need to know, in order to understand semiotics?
  • Shawn
    12.6k


    There's little to no evidence pointing to there being extraneous (metaphysical) factors at play when analyzing the mind (brain). And most of said evidence already starts with a metaphysical assumption that can't be understood, meaning lots of hand-waving and ifs waiting to be empirically proven. But, I'm not sure you see the paradox in ascertaining the validity of metaphysical statements in a materialist world.
  • Wayfarer
    20.8k
    But, I'm not sure you see the paradox in ascertaining the validity of metaphysical statements in a materialist world. -- Posty McPostface

    It's not a paradox. You're starting from the very questionable assumption that mind is physical and also replicable by computers. Both of those propositions are highly questionable, and I'm questioning them. But all your responses begin with, more or less, 'given that the mind is physical....'. Well, I'm not 'giving' that - I don't believe mind is physical, or that computers actually do replicate human intelligence.
  • Shawn
    12.6k


    If you can disprove the Church-Turing-Deutsch principle, then I'll retract my assumption.
  • Wayfarer
    20.8k
    There's little to no evidence pointing to there being extraneous (metaphysical) factors at play when analyzing the mind (brain). -- Posty McPostface

    So, what is 'at play when analyzing the brain'? Here is a press release from the National Institute of Mental Health, dated 2004, about what is involved.

    Leading scientists in integrating and visualizing the explosion of information about the brain will convene at a conference commemorating the 10th anniversary of the Human Brain Project (HBP). “A Decade of Neuroscience Informatics: Looking Ahead,” will be held April 26-27 at the William H. Natcher Conference Center on the NIH Campus in Bethesda, MD.

    Through the HBP, federal agencies fund a system of web-based databases and research tools that help brain scientists share and integrate their raw, primary research data. At the conference, eminent neuroscientists and neuroinformatics specialists will recap the field’s achievements and forecast its future technological, scientific, and social challenges and opportunities.

    “The explosion of data about the brain is overwhelming conventional ways of making sense of it," said Elias A. Zerhouni, M.D., Director of the National Institutes of Health. "Like the Human Genome Project, the Human Brain Project is building shared databases in standardized digital form, integrating information from the level of the gene to the level of behavior. These resources will ultimately help us better understand the connection between brain function and human health.”

    The HBP is coordinated and sponsored by 15 federal organizations across four federal agencies: the National Institutes of Health (NIMH, NIDA, NINDS, NIDCD, NIA, NIBIB, NICHD, NLM, NCI, NHLBI, NIAAA, NIDCR), the National Science Foundation, the National Aeronautics and Space Administration, and the U.S. Department of Energy. Representatives from all of these organizations comprise the Federal Interagency Coordinating Committee on the Human Brain Project, which is coordinated by the NIMH. During the initial 10 years of this program 241 investigators have been funded for a total of approximately $100 million.

    More than 65,000 neuroscientists publish their results each month in some 300 journals, with their output growing, in some cases, by orders of magnitude, explained Stephen Koslow, Ph.D., NIMH Associate Director for Neuroinformatics, who chairs the HBP Coordinating Committee.

    “It’s virtually impossible for any individual researcher to maintain an integrated view of the brain and to relate his or her narrow findings to this whole cloth,” he said. “It’s no longer sufficient for neuroscientists to simply publish their findings piecemeal. We’re trying to make the most of advanced information technologies to weave their data into an understandable tapestry.”

    Do you think it's gotten easier since then, or more complicated? I don't know for sure, but I bet the latter.
  • apokrisis
    6.8k
    Except for one point: when intelligence evolves (which it surely does), how come it discovers 'the law of the excluded middle'? That is not 'something that evolved'. -- Wayfarer

    Funnily enough, that is the very first thing nature must discover. Existence itself - speaking as an organicist - arises via dichotomous symmetry-breaking. That is how dissipative structure is understood - as the emergence of the dichotomy of "dumb" local entropy and the "smart" global organisation that can waste it.

    So the laws of thought recapitulate that basic world-creating mechanism. The LEM is the final part of the intellectual apparatus that dissipates our uncertainty concerning possibility. We get fully organised logically when we boil things down to being definitely either/or (and hence, ultimately, both).

    We can't just have made up the ways of thinking that have proved so unreasonably effective. The laws of thought are not arbitrary whims but an expression of the logic of existence itself.

    That is what Peircean pansemiotic metaphysics is all about, after all. The universe arises via a generalised growth of reasonableness. That sounds mystical until you see it is just talking about the logic of symmetry-breaking upon which our best physical theories are now founded.
  • Shawn
    12.6k
    Do you think it's gotten easier since then, or more complicated? I don't know for sure, but I bet the latter. -- Wayfarer

    So, your argument is some sort of Zeno's paradox, as in we're not there, thus we'll never get there? Because I don't see what you say, or have referenced, as proof that something like simulating the human brain is impossible. Highly difficult or complex? Yes, surely.
  • apokrisis
    6.8k
    Because I don't see what you say, or have referenced, as proof that something like simulating the human brain is impossible. -- Posty McPostface

    The burden is really on you - as the AI proponent - to show that your machine architecture is actually beginning to simulate anything the human brain is doing.

    So what is it that "conscious brains" actually do in core terms? That is the model you have to be able to present and defend to demonstrate that your alleged technical progress is indeed properly connected to this particular claimed end.
  • Wayfarer
    20.8k
    That sounds mystical -- apokrisis

    Mystical is fine, thanks. And my point is not that logical laws aren't part of nature, but that it is only by virtue of the rational intellect that H. sapiens is able to discover them; and that I don't think this is necessarily something that can be understood in terms of the 'entropification principle'. I prefer a teleological attitude - that we're something the Universe enjoys doing.

    So, your argument is some sort of Zeno's paradox, as in we're not there, thus we'll never get there? -- Posty McPostface

    My argument is that the nature of mind (or being) is different in kind from the kinds of things that AI emulates or the physical sciences study. I could refer to any number of anti-naturalist philosophers in support. But one version of the argument is this: that 'mind' is 'what interprets', but that it is never actually disclosed as an object of analysis. If you look at the history of western philosophy since Descartes, the overwhelming tendency amongst scientific thinkers was to seek explanations in terms of physics - that is what 'physicalism' is, after all. This eventuated in seeking to understand 'mind' as a kind of substance - some kind of spooky ethereal essence, analogous to the spirit in a bottle of liquor. That, I think, is the kind of attitude that culminated in Gilbert Ryle's notion of the 'ghost in the machine'; it's obvious that there is no such ghost or geist or whatever, but the entire effort is misconstrued.

    Now I say that 'mind' or 'spirit' is something entirely different to that; it is 'that which interprets meaning'. You know the root of the word 'intelligence' is actually 'inter-legere', meaning (roughly) 'to read between'. And the basis of rationality is to be able to abstract and compare - to say 'this means that', 'this equals that', 'this is greater than that', and so on. That also is the basis of computation.

    But what is doing that, in the human case, is, I contend, 'transcendental', in the Kantian sense, that is, it forms the basis of experience and judgement, without itself being an object of perception. We live inside, as it were, that web of judgements, a 'semantic web', so to speak, from which we decide what means what, etc. But the mind that is doing that, is not actually visible to itself - it is, analogously, the eye that can't see itself, the hand that can't grasp itself.

    Eliminativism is the logical endpoint of the materialist account of this faculty - it dismisses the very thing which enables us to explain or comprehend anything whatever. That is why Dennett's critics called his book 'Consciousness Explained' 'Consciousness Ignored'. That is what all materialism does. It's worked itself into this historically-conditioned viewpoint, whereby that which is the most real, the most fundamental aspect of reality, is itself regarded as non-existent, instead of being recognised for what it is, that is, transcendent, or prior to any form of explanation or rational analysis. Many of the gross predicaments of modern existence arise from this fundamental error.
  • praxis
    6.2k
    Wouldn't it eventually be much better informed and unbiased, even enough to make moral statements and ethical postulates? -- Posty McPostface

    As I see it the best or safest approach to what you propose would be to program a non-sentient AI to express human ideals, and to give it control over us, forcing us to live up to our own ideals. If it were sentient and too much like us it would be just as irrational as we are.
  • apokrisis
    6.8k
    I don't think this is necessarily something that can be understood in terms of the 'entropification principle'. I prefer a teleological attitude - that we're something the Universe enjoys doing. -- Wayfarer

    Sure. But if there is something like a 140 orders of magnitude difference between the amount of "dumb entropification" and the amount of "smart entropification" achieved by humans, then the Universe either is horribly bad at achieving its ends or it enjoys something else more.

    Just a little bit of quantitative fact checking there.

    Of course, the Singulatarians claim AI will spread intelligence across the Universe in machine form. It comes from the same place as interstellar panspermia.

    But again it is not hard to do the entropic sums on that. There are no perpetual motion machines. And indeed, it is not possible even to get close to that level of thermodynamic efficiency, no matter how clever the intelligent design.
  • Shawn
    12.6k
    The burden is really on you - as the AI proponent - to show that your machine architecture is actually beginning to simulate anything the human brain is doing. -- apokrisis

    But, what is being asked of me is to prove grounds for there existing a false negative. It can't be done, and seemingly it will never be able to be proven by any AI denier.
  • Shawn
    12.6k
    As I see it the best or safest approach to what you propose would be to program a non-sentient AI to express human ideals, and to give it control over us, forcing us to live up to our own ideals. If it were sentient and too much like us it would be just as irrational as we are. -- praxis

    That is true to some extent. We just don't know what a simulated human mind in a computer will end up like. It might become depressed, schizophrenic, or develop other ailments of the mind that are known to us. I suppose we'd better prepare to encounter some problems of the human mind that might, or likely will, be mimicked inside silicon. I'm not even entirely sure if a computer mimicking the human mind would be able to accurately diagnose itself, which is dangerous and what people seem to be talking about nowadays.
  • apokrisis
    6.8k
    I'm not asking you to prove something cannot happen. I'm asking you to demonstrate that what you claim has started to happen.

    So - as is one of the defining differences between minds and machines - the argument is inductive rather than deductive. The degree of belief is predicated on a hypothesis seeming reasonable in that it is capable of being falsified. Has your claimed counterfactual - AI is simulating the essence of mindful action - come into sight yet?
  • Shawn
    12.6k
    Has your claimed counterfactual - AI is simulating the essence of mindful action - come into sight yet? -- apokrisis

    I have another way of answering that question, via the CTD-principle mentioned earlier in regards to @Wayfarer. If it can be proven either true or false, then we would have a definitive answer as to the true nature of the human mind being discussed here.
  • praxis
    6.2k
    I was thinking about AI in relation to emotions today. I just finished reading a book about the theory of constructed emotion, and one thing that stuck out is how much emotion is linked to simply regulating metabolism and other things that a machine would most likely do in a very different way. No one would build an AI with an endocrine system, for example, that secretes stress hormones to help deal with dangerous situations.
  • apokrisis
    6.8k
    I'd refer you to the writings of Robert Rosen and other theoretical biologists like Howard Pattee. The whole idea of simulation falls apart when you consider biological reality.

    The very point of a machine is that it is materially and energetically disconnected from the real world of dissipative relations. A computer just mindlessly shuffles strings of symbols. It becomes Turing universal once that physical disconnection is made explicit by giving the machine an infinite tape and infinite time. The only connection now is via the mind of some human who thinks some programme is a useful way of rearranging a bunch of signs and is willing to act as the interpreter. If the output of the machine is X, then I - the human - am going to want to do physical thing Y.
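
    To make that 'mindless shuffling' concrete, here is a minimal sketch of a Turing-machine step loop in Python. The rule table, tape contents and halting convention are invented purely for illustration; note that nothing in it touches the physical world until some human reads the final tape off and decides to act on it:

    ```python
    # Minimal sketch of a Turing machine "mindlessly shuffling symbols".
    # The rule table is invented for illustration: it flips a string of 0s
    # and 1s and halts at the first blank. Any meaning comes from whoever
    # interprets the final tape - the machine itself just rewrites signs.
    from collections import defaultdict

    # (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
    }

    tape = defaultdict(lambda: "_", enumerate("0110"))
    state, head = "flip", 0

    while state != "halt":
        tape[head], move, state = rules[(state, tape[head])]
        head += move

    print("".join(tape[i] for i in sorted(tape)))  # prints 1001_
    ```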

    So one can imagine setting up a correspondence relation where every physical degree of freedom in the Universe is matched by some binary information bit stored on an infinite tape which can shuffle back and forth in infinite time. But clearly that is physically unrealistic. And also it misses the point that life and mind are all about there being a tight dynamical interaction between informational symbols and material actions.

    There may be a divide between information and entropy. Yet there has to be also that actual connection where the information is regulating the entropy flow (and in complementary fashion, that flow is optimising the information which regulates it).

    So until you are talking about this two-way street - this semiotic feedback loop - at the most fundamental level, then you are simply not capturing what is actually going on.

    Reality is not a simulation and simulation cannot be reality. CTD makes empty claims in that regard. Formal cause can shape material reality, but it can't be that material reality.
  • Wayfarer
    20.8k
    The CTD-principle mentioned earlier in regards to Wayfarer. -- Posty McPostface

    Which says:

    'The principle states that a universal computing device can simulate every physical process.'

    Which means - what? Maybe there's something basic I don't understand about it but I fail to grasp the profundity of it.
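
    Taken at its most mundane, 'simulating a physical process' just means stepping a mathematical model of it forward in time - something like this toy sketch of a dropped ball (the model and numbers are made up for illustration), which hardly seems to carry any metaphysical weight:

    ```python
    # Toy sketch of "simulating a physical process": Euler-stepping a ball
    # dropped from 100 m under constant gravity. Just arithmetic repeated
    # many times; the numbers are made up for illustration.
    g, dt = 9.81, 0.01            # gravity (m/s^2), time step (s)
    height, velocity, t = 100.0, 0.0, 0.0

    while height > 0:
        velocity += g * dt        # speed up a little
        height -= velocity * dt   # fall a little further
        t += dt

    print(f"hits the ground after roughly {t:.2f} s")   # about 4.5 s
    ```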

    In any case, I contend that there's a very simple example of 'something that isn't physical' that is right before your metaphorical eyes at every moment - and that's numbers.

    A number is a purely intellectual object, i.e. it can only be grasped by a mind. Numbers don't exist in any physical sense, but they're indispensable for science (computer science included).

    And the same general principle can be extended to all kinds of symbolical systems; they're not physical, but intelligible, i.e. can only be grasped by a mind. And our world - the meaning-world that we live in, within which we designate things as 'physical', or whatever - is held together by these very meaning-elements - numbers, logical laws, and language, without which we would still be poking twigs into termite holes.

    And this is actually an inconvenient truth for materialism.

    Some philosophers, called rationalists, claim that we have a special, non-sensory capacity for understanding mathematical truths, a rational insight arising from pure thought. But, the rationalist’s claims appear incompatible with an understanding of human beings as physical creatures whose capacities for learning are exhausted by our physical bodies. 1

    No kidding! (I wonder where one would go to see a "philosophical rationalist" ;-) )

    Meanwhile the first popular book written by David Deutsch, who is presented as being the uber-rationalist in all of this, is described as follows:

    For David Deutsch, a young physicist of unusual originality, quantum theory contains our most fundamental knowledge of the physical world. Taken literally, it implies that there are many universes “parallel” to the one we see around us. This multiplicity of universes, according to Deutsch, turns out to be the key to achieving a new worldview, one which synthesizes the theories of evolution, computation, and knowledge with quantum physics. Considered jointly, these four strands of explanation reveal a unified fabric of reality that is both objective and comprehensible, the subject of this daring, challenging book. The Fabric of Reality explains and connects many topics at the leading edge of current research and thinking, such as quantum computers (which work by effectively collaborating with their counterparts in other universes), the physics of time travel, the comprehensibility of nature and the physical limits of virtual reality, the significance of human life, and the ultimate fate of the universe. Here, for scientist and layperson alike, for philosopher, science-fiction reader, biologist, and computer expert, is a startlingly complete and rational synthesis of disciplines, and a new, optimistic message about existence.