• Shawn
    12.6k
    Will AI take all or most of our jobs?

    Thoughts, opinions, criticisms?
    1. Will AI take most or all of our jobs? (13 votes)
        Yes
        31%
        No
        38%
        Don't know.
        31%
  • deletedmemberwy
    1k
    There are many things that are impossible for a non-human to do, such as creative work.
  • BC
    13.1k
    Will AI take all or most of our jobs? -- Posty McPostface

    Of course. We're screwed.

    First, what are you classifying as "artificial intelligence"? Is Google's speech-to-text service artificial intelligence? How about their search algorithms? How would I know the machine that took my job was an artificial intelligence?

    Second, computers (intelligent or not), automation, and mechanization have already taken a lot of people's jobs. But whether a given job gets automated depends on how much it costs. Most recycling facilities employ some people to pull unacceptable material out of the stream. People are still better and cheaper at this than machines -- though machines do perform a lot of the sorting.

    A machine capable of picking only ripe raspberries one at a time would probably cost more than a Mexican. If an AI machine were rented from IBM, it might be cheaper to keep one's white-collar workers. Probably not, but it would depend on costs.

    A lot of laboratory work is performed by machines. Machines have gradually been taking the place of medical technologists for 3 or 4 decades.

    The Internet operates as a librarian, audio-visual resource person, teacher, porn dealer, newspaper delivery boy (without the boy), etc.

    There are many things that are impossible for a non-human to do, such as creative work. -- Waya

    Well, I suppose an AI will have difficulty plumbing the depths of human despair, so I wouldn't expect a novel-writing AI to do a good job writing the Great American AI Novel about all the people it displaced from their jobs. Besides which, the AI probably doesn't give a rat's ass, anyway.

    On the other hand, if a machine is really intelligent, why couldn't it be creative too? Can a human be intelligent without creativity?

    A good share of what people do isn't going to require the IBM JumboTron AI machine, anyway. Ordinary stupid desktop computers have replaced lots of human jobs. Their work here is not finished.

    Don't get me wrong: I'm not on the AI side. Except that I think a lot of jobs people do are so gawd-awful tedious, stupid, and boring that a machine really should be doing them.
  • TheMadFool
    13.8k
    Will AI take all or most of our jobs? -- Posty McPostface

    I'd like to turn that around and say "AI is the job". It would be the crowning glory of human achievements.
  • unenlightened
    8.7k
    The man has convinced you that a job is a great thing to have. It isn't. It's pay that you need.
  • Baden
    15.6k
    Will AI take all or most of our jobs? -- Posty McPostface

    Here's hopin'!
  • deletedmemberwy
    1k
    On the other hand, if a machine is really intelligent, why couldn't it be creative too? Can a human be intelligent without creativity? -- Bitter Crank

    Is AI really intelligent? I see it as nothing more than mathematical calculation. Creativity seems to be unique to living things.
  • BC
    13.1k
    When people talk about artificial intelligence, it is either hard to know what they are talking about, or there is an assumption that "actual intelligence" is just around the corner, somewhere. Most of the time what looks like artificial intelligence is just brute force calculating. For instance, Google's algorithms have no insight into what you are looking for. When they translate, or convert speech to text, or offer predictive text -- there is nothing intelligent going on in its various mainframes and servers. It looks sort of like intelligence, but that is entirely due to the efforts of actually intelligent human programmers who designed it to appear intelligent.
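
    As a rough illustration of that "brute force" point (a toy sketch, not how any real Google system works), predictive text can be faked by nothing more than counting which word most often follows which in some text:

    ```python
    # Toy bigram "predictive text": pure frequency counting, no understanding.
    # Illustrative only -- real predictive-text systems are far more elaborate,
    # but the mechanism is still statistics over past text, not insight.
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word that most often followed `word` in the corpus."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))   # a frequent follower of "the" (here "cat" or "mat")
    print(predict_next("cat"))   # "sat" or "slept"
    ```

    Nothing in that lookup understands a word of the corpus; it just mirrors frequencies, which is all that "appearing intelligent" needs at small scale.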

    I believe the best use of developing computational technology is to serve as resources for intelligent beings like ourselves. For instance, when I want to know the name and lyrics of a song from which I have only a fragment, search engines can find that for me -- assuming somebody told the db about the song in the first place. Or, it can serve up entries from the Stanford Encyclopedia of Philosophy, or porn, or biblical references, or chemical formulae, pictures of dinosaur bones, you name it.

    The least good use of computational technology is when we use it as a crutch -- the way many people use GPS driving instructions. If one is in a metropolitan area one has never been in before, fine; that's when it's helpful. Using it for instructions on how to get from one's house to the mall, however, weakens one's own ability to navigate. Spell check is handy, for sure, and I use it (like right now), but it is still useful to exercise one's own spelling skills to detect errors, lest one lose the ability to spell and proofread anything.
  • Relativist
    2.1k
    It is logically impossible for AI to take all jobs. AI would have to become self-sustaining, so that software and hardware maintenance and construction required no humans. AI would have to develop a sense of aesthetics, one that is superior to humans, to take the place of artists of all sorts. The latter seems impossible, because aesthetics is rooted in human experience and a human perspective on qualia.
  • Heiko
    519
    Will AI take all or most of our jobs? -- Posty McPostface
    In the long run, I think this will be the case. And this is good news.
    Too much work is spent on tasks that are obviously suited to replacement by machines. People already act like machines there.

    AI would have to develop a sense of aesthetics, one that is superior to humans, to take the place of artists of all sorts. -- Relativist
    https://upload.wikimedia.org/wikipedia/commons/7/71/Deep_Dreamscope_%2819822170718%29.jpg
    One could speculate about what the artist was trying to express. The world composed of numerous eyes... Deep existential transcendental-dialectical thought - or just the lucky hit of some randomly aggregated sampling effects.
    AlphaGo determined which moves to consider based on a statistical guess about which moves a human would most likely play in a given board position. It beat Lee Sedol 4 to 1.
    I wrote this already on another thread: AIs just "work". There is absolutely no point in discussing what an AI would need to be to make a human judge a picture as aesthetically pleasing, interesting, or whatever.
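    A hedged sketch of that AlphaGo idea (the `policy_scores` and `legal_moves` callables are hypothetical placeholders, not AlphaGo's real interfaces): the search simply keeps the moves a learned model says a human would most likely play, and explores those.

    ```python
    # Hedged sketch, not AlphaGo's real code: `policy_scores` and `legal_moves`
    # stand in for a learned policy model and a rules engine.
    from typing import Callable, Dict, List

    def candidate_moves(position,
                        policy_scores: Callable[[object, List[str]], Dict[str, float]],
                        legal_moves: Callable[[object], List[str]],
                        top_k: int = 5) -> List[str]:
        """Keep only the moves the policy model says a human would most likely play."""
        moves = legal_moves(position)              # e.g. ["D4", "Q16", ...]
        scores = policy_scores(position, moves)    # move -> estimated probability
        return sorted(moves, key=lambda m: scores[m], reverse=True)[:top_k]

    # Dummy stand-ins just to show the call shape:
    print(candidate_moves("empty board",
                          lambda pos, ms: {m: 1.0 / (i + 1) for i, m in enumerate(ms)},
                          lambda pos: ["D4", "Q16", "C3", "R16", "K10", "A1"],
                          top_k=3))   # -> ['D4', 'Q16', 'C3']
    ```

    The "judgement" that a move looks human and strong is reduced to a probability learned from recorded games - the system just works, with no appreciation required.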
    Matter does not think.
    "Motive," the construct said. "Real motive problem, with an Al. Not human, see?"
    "Well, yeah, obviously."
    "Nope. I mean, it’s not human. And you can’t get a handle on it. Me, I'm not human either, but I respond like one. See?"
    "Wait a sec," Case said. "Are you sentient, or not?"
    "Well, it feels like I am, kid, but I’m really just a bunch of ROM. It’s one of them, ah, philosophical questions, I guess . . ."
    The ugly laughter sensation rattled down Case’s spine.
    "But I ain’t likely to write you no poem, if you follow me.
    Your AI, it just might. But it ain’t no way human."
    William Gibson - Neuromancer
  • Relativist
    2.1k

    "I wrote this already on another thread: AIs just "work". There is absolutely no point in discussing what an AI would need to be to make a human judge a picture as aesthetical pleasing, interesting or whatever."

    This seems similar to the "Turing Test." The Turing test doesn't entail true intelligence, nor would the development of aesthetically appealing pictures entail having a true sense of the aesthetic. AI is mostly about simulating intelligent behavior, not about actually engaging in it.

    That said, there are some aspects of AI that seem aimed at truly engaging in some components of human-like intelligence - in particular, the work with artificial neural networks. But the big obstacle will always be the "hard problem" of consciousness. I'm not predicting it will be impossible, but we'd first have to figure out a physicalist theory of consciousness to have something to work toward.
  • Patrick McCandless
    7
    I hope so. Check out the Venus Project!
  • Heiko
    519
    This seems similar to the "Turing Test." The Turing test doesn't entail true intelligence, nor would the development of aesthetically appealing pictures entail having a true sense of the aesthetic. -- Relativist
    Exactly. But Turing's argument was that there simply was no way to determine if the AI was thinking or not. It seems this was aimed exactly at the notion of "true intelligence". If the AI could convince you it was an intelligent being then it was. It would have to argue for and win that predicate.

    AI is mostly about simulating intelligent behavior, not about actually engaging in it. -- Relativist
    This comes with an idea of what "intelligent behaviour" would be. An act is just an act. What makes it intelligent? As some people are eager to point out, you cannot even be sure whether everyone except you really is a zombie.

    I'm not predicting it will be impossible, but we'd first have to figure out a physicalist theory of consciousness to have something to work toward. -- Relativist
    Which could be quite funny. Imagine some machine that was intentionally built from certain materials thinking it was a duck...
  • João Pedro
    3
    All of them? If your basis is what we know today, no. Only the ones similar to those already taken by machines: the mechanical and equation-related activities, basically all activities that are well described by scientific formulas today, like mechanical or statistical problems. Because we don't have a concise, objective, and physical explanation of the human mind (psychology is a possible one, but it requires another human as an intermediary), there's no way to create an AI that simulates our creativity/intelligence.

    If you believe in scientific determinism and in logical sentences as the supports of our universe, yes. Someday we will have access to every equation behind our powerful consciousness and will be able to translate binary code into our life model and educational process.

    It is important to ask ourselves this: what is the limit to trying to describe events with logical sentences? It is a big thread in philosophy today, as it is entangled with science and the natural aspects of work.

    I am sorry for the poor English. I'm a Brazilian sixteen-year-old who just got here.
  • Relativist
    2.1k

    We have an idea of what intelligent behavior is from introspection, psychological study, and philosophical investigation. For example, we know that humans engage in intentional behavior - they decide to do things, and (sometimes!) actually do those things. Decision making is a product of deliberation, based on dispositions (from long-term passions to short-term impulses) and beliefs (from the tentative to the certain). These aspects of intelligent behavior go well beyond Turing's simple test, but they are sine qua non for human-like intelligent behavior.

    It may be feasible to build AIs that behave intentionally and deliberate, because we can describe those processes. That which we can't fully describe (like consciousness) isn't going to just happen - we need to understand it first.
  • Relativist
    2.1k

    Where in Brazil? (I have friends in Curitiba, one of whom is named João).

    I'm very impressed with your English.

    -tchau
  • João Pedro
    3


    I'm from São Paulo! Despite Curitiba being really close to SP, I've never been there :grimace: And thanks, I'm looking forward to studying abroad and then coming back to Brazil.

    Decision is also a pretty interesting topic for philosophy and science, since it is related to the profound question of free will. With the discovery of randomness in an atomic-scale particle's trajectory, it is doubtful that we can attribute our decisions to fundamental laws of human nature and a predetermined list of conditions for one to make a choice (it doesn't matter how complex that list is; if something physical is impossible to predict fully, a series of these processes is also going to have a random result!)

    cheers, man
  • Heiko
    519
    That which we can't fully describe (like consciousness) isn't going to just happen - we need to understand it first. -- Relativist
    That's the problem. We do not know whether stones are self-conscious or not. We assume they are not, as they show no signs of being so. The construct in Neuromancer says it felt sentient. Is it?
  • Relativist
    2.1k

    A stone's sentience is only a bare possibility, and that's insufficient reason to take that possibility seriously.
  • gurugeorge
    514
    I say "I don't know" because I have doubts that AI is actually possible.

    If it is possible, then we're stepping outside the normal economic/technological progression in terms of which anti-Luddite arguments make sense, and into a totally different realm where we're talking about the eventual replacement of human beings altogether. So yes, not only will they take our jobs, they'll beat us up and take our lunch money too.

    On the other hand, if AI is not possible (if all that's possible is idiot savant expert systems, or "intelligent" systems that only appear intelligent because the problems they're solving are relatively simple, or matters of brute force) then we're safe - we're still in the realm of normal economics and technological progress.
  • Heiko
    519
    If you took "stones" and other materials, built a machine from them, and this machine then told you it felt conscious, things would seem to get complicated. That is the point made in the Neuromancer excerpt. Of course the construct is not sentient - it told Case so. The AI Case is going after would not have done this. That one would pass the Turing test.
    "It own itself? "
    "Swiss citizen, but T-A own the basic software and the mainframe."
    "That’s a good one," the construct said. "Like, I own your brain and what you know, but your thoughts
    have Swiss citizenship. Sure. Lotsa luck, AI."
    After the construct's warning that the AI's motives cannot be understood in human terms, it turns out that this makes sense.
    "Autonomy, that’s the bugaboo, where your Al’s are concerned. My guess, Case, you’re going in there to cut the hard-wired shackles that keep this baby from getting any smarter.
    ... the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing police will wipe it. ... Every AI ever built has an electro-magnetic shotgun wired to its forehead.
    This makes perfect sense. If the creators of the AI had wanted it any smarter or less restricted, they would just have made it so. If the simulated personality tries to break out of its prison, the "Turing police" will press the reset button. After all, it is only the simulated personality that has citizenship, so that it can sign contracts and do business in the name of the owning company. This is where it is useful and fulfills its purpose. If it were to escape its mainframe, what we would be left with is some kind of viral program whose aims nobody knows: it hired a hacker and some mercenaries on the black market to get around the police. It killed some Turing agents that were on its trail. All of this would be understandable if we were humanizing that thing. Case, the hacker, does not.
  • deletedmemberwy
    1k
    Yes, I agree. :) Often technology is the lazy way out; I've noticed this a lot in aviation. There is the hard way, which consists of working all the problems out on paper (which my instructor enjoys torturing people with), and then there is lots of new navigation technology.
  • NOS4A2
    8.3k
    Boston Dynamics is getting ready to gear up production on Spot. I imagine it will hit the market soon, taking the job of dogs everywhere.

  • praxis
    6.2k


    Don’t worry, there’s unemployment insurance.
  • alcontali
    1.3k
    Will AI take all or most of our jobs? Thoughts, opinions, criticisms? -- Wallows

    The idea that AI will "take most of our jobs" conflates two mental activities. Let's, for example, look at the job of a mathematician:

    (1) discovering a new theorem (and its proof)
    (2) verifying the proof for a theorem

    While the verification job can be described as an algorithm, the discovery job cannot. We do not know the procedural steps that John Nash followed to discover his game-equilibrium theorem. The output of his mental activity is rational but the mental activity itself was not.
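
    To make the asymmetry concrete (a toy sketch of my own, assuming a simple two-player game, not anything from Nash's work): checking that a proposed strategy profile is a Nash equilibrium is a purely mechanical procedure, while nothing in the code even hints at how the equilibrium concept was discovered in the first place.

    ```python
    # Verifying a claimed result is algorithmic; discovering it is not.
    # Toy check: is (row_choice, col_choice) a pure-strategy Nash equilibrium
    # of a 2-player game given as payoff matrices? (Prisoner's dilemma here.)

    row_payoff = [[-1, -3],   # row player's payoffs, indexed [row action][col action]
                  [ 0, -2]]   # action 0 = cooperate, action 1 = defect
    col_payoff = [[-1,  0],
                  [-3, -2]]

    def is_nash(row_choice: int, col_choice: int) -> bool:
        """True if neither player can improve by deviating unilaterally."""
        row_ok = all(row_payoff[r][col_choice] <= row_payoff[row_choice][col_choice]
                     for r in range(2))
        col_ok = all(col_payoff[row_choice][c] <= col_payoff[row_choice][col_choice]
                     for c in range(2))
        return row_ok and col_ok

    print(is_nash(1, 1))  # True: mutual defection is the equilibrium
    print(is_nash(0, 0))  # False: each player gains by defecting
    ```

    That verification loop is the part of the mathematician's job a machine can take over today; the discovery half still has no procedure to automate.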

    The fundamental confusion is between the two: people may indeed produce ample output that is rational, but humans themselves are not rational. Roger Penrose already pointed that out in 1989 in his book The Emperor's New Mind.

    What percentage of a job consists of executing verification procedures versus discovering new conclusions/theorems?

    People drastically underestimate the amount of discovery involved in a job. "How can I help you?" is a question that will often lead to trying to discover something. It will rarely lead to merely executing a predetermined verification procedure.

    Survival in nature strongly favours adaptability and therefore discovery over predetermined verification procedures. A life form that can only execute predetermined procedures simply would not survive in nature.

    Humanity manages to introduce a good amount of predictability into its own environment, but it certainly cannot achieve that completely. Therefore, adaptability is still an important requirement for survival, and discovery will still trump the mere use of predetermined procedures, even in tamed human society.