• Forgottenticket
    215
    https://www.theguardian.com/science/2017/feb/12/daniel-dennett-politics-bacteria-bach-back-dawkins-trump-interview

    Dennett, in his latest book, argues that we are heading into a world of post-intelligent design. For a long while we have had top-down, intentional human design that could be understood and comprehended by one person. More and more, we are seeing bottom-up design in the form of genetic algorithms, deep learning, and the like.
    Is having complete knowledge important? Or could humanity survive by existing within the dark without having to know the behind-the-scenes extras? If this happens, what would a post-intelligent design world look like?
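
    To make the idea of bottom-up design concrete, here is a minimal sketch, purely illustrative and not anything from Dennett or the article (the toy target, parameters, and helper names are my own assumptions): a tiny genetic algorithm in Python that "designs" a bit string through variation and selection, with no step at which any designer comprehends the route to the solution.

    # Toy genetic algorithm: bottom-up "design" of a bit string by variation and selection.
    # All parameters (population size, mutation rate, target) are arbitrary illustrations.
    import random

    TARGET = [1] * 20                                    # arbitrary goal: twenty 1s
    POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.02

    def fitness(genome):
        # Count how many bits already match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    def crossover(a, b):
        # Splice two parent genomes at a random point.
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    # Start from random genomes and let selection do the rest.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]             # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness:", fitness(max(population, key=fitness)))

    Nothing in this loop understands the design it produces; fitness scores and random variation stand in for a comprehending designer, which is the sense in which such designs are bottom-up.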
  • Noblosh
    152
    Or could humanity survive by existing within the dark without having to know the behind-the-scenes extras? — JupiterJess
    Humanity's survival is off the point, I think the proper question would be: "Could individuals thrive without understanding that which would enable them to do so?", in which case I would argue they couldn't because they would no longer hold the mastery needed to advance themselves.
  • ssu
    8.6k
    Is having complete knowledge important? Or could humanity survive by existing within the dark without having to know the behind-the-scenes extras? If this happens, what would a post-intelligent design world look like? — JupiterJess
    I think Dennett makes a point. We can use machines, technologies, treatments, and the whole panoply of what science and technology have given us, and then have people thinking that the science is "fake", bogus, or merely a political view.

    Dennett (from the above interview):
    The real danger that’s facing us is we’ve lost respect for truth and facts. People have discovered that it’s much easier to destroy reputations for credibility than it is to maintain them. It doesn’t matter how good your facts are, somebody else can spread the rumour that you’re fake news. We’re entering a period of epistemological murk and uncertainty that we’ve not experienced since the middle ages.

    If it would serve some political agenda or discourse to dispute, let's say, Einstein's relativity, then that dispute would rapidly spread in today's media, getting strong support from those who believe in the political agenda behind it. The argument would be that it's something "unresolved" or biased. Scientists would be dismissed as stooges in a huge conspiracy with evil intentions if they say that relativity is real. That would be the dismal and ugly way it would go.
  • jkop
    905
    ...what the postmodernists did was truly evil. They are responsible for the intellectual fad that made it respectable to be cynical about truth and facts. You’d have people going around saying: “Well, you’re part of that crowd who still believe in facts.” — Dennett

    8-)
  • Colin B
    8
    I think that Dennett's own philosophical works might be entering into an area of post-intelligent design.
  • Sivad
    142
    "We’re entering a period of epistemological murk and uncertainty that we’ve not experienced since the middle ages."

    It's about time. I don't see it as a regression; we've been under an ossified paradigm of official narrative, pundit consensus, and pluralistic ignorance for far too long. There's a growing awareness of the "treason of the clerks" which is forcing people to confront the fact that we never found, and were never on, epistemic terra firma in the first place.
  • Terrapin Station
    13.8k
    I think that Dennett's own philosophical works might be entering into an area of post-intelligent design. — Colin B

    I keep forgetting we don't have a "like" button here--I went to hit it for your comment.
  • Wayfarer
    22.5k
    There have been some discussions of Dennett's latest book on the forum already, and some links to reviews of it by Thomas Nagel in the NY Review of Books, Steve Poole in the New Statesman, and a long and detailed biographical sketch in The New Yorker.

    All of the reviews note that Dennett is a very erudite guy - also a good enough jazz pianist to make a living from it before he became tenured - and obviously brilliant. But they all reject, or at least deeply question, the fundamental tenet of his life's work:

    Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”
    — Thomas Nagel
    ...by the end of this brilliant book, the one thing that hasn’t been explained is consciousness. How does first-person experience – the experience you are having now, reading these words – arise from the electrochemical interactions of neurons? — Steve Poole

    Turning it around - the problem Dennett has to explain away is the reality of first-person experience. If he can't explain it away, then materialism is false.

    I find it an easy dilemma to resolve.

    [Dennett]: The real danger that’s facing us is we’ve lost respect for truth and facts... — ssu

    What Dennett cannot afford to admit is that the 'truth and facts' of which he speaks must always be of the kind that are amenable to quantitative analysis and measurement; and that, therefore, most or all of what is of value in the study of the humanities is excluded by these criteria, because of the 'fact/value' or 'is/ought' dichotomy, first articulated by David Hume. Dennett's work, again, relies on the ability to explain values - such as judgements of meaning - in terms of facts - such as the measurement of neurochemical transactions across synapses. If indeed the former is not reducible to the latter, then his project fails.
  • Janus
    16.3k
    Someone sent me this via email the other day: [image attachment]
    Is it an example of "post-intelligent design"?
  • jkop
    905


    I'd say intelligence is a condition for any design (aleatoric design even).
  • Janus
    16.3k


    Yeah, to some degree, I guess.
  • Janus
    16.3k


    I think Dennett is exaggerating in saying that in the past the best minds could understand almost everything. Perhaps it might have been true regarding the sciences, but not also of literature, the arts, history, philosophy, metaphysics, languages, and so on. And even if a very few of the very best minds could understand "almost everything", what import could that have for the rest of 'ignorant' humanity?

    In any case, the notion that they would have understood everything relies on the premise that the science they understood was correct.
    Even if it were possible, why would it be so important that everyone (or some people, or most people) should understand the currently accepted mechanical principles behind how everything works, as opposed to understanding how to do their jobs? This seems like some form of scientism, I would say.

    What is the actual, practical difference between some very few exceptional individuals understanding how everything works and no one individual at all understanding how everything works, even within any given science?
  • Marchesk
    4.6k
    What is the actual, practical difference between some very few exceptional individuals understanding how everything works and no one individual at all understanding how everything works, even within any given science? — John

    There's no way that anyone understands everything in any field of consequence. That's certainly been true in Information Technology for a long time, even without genetic algorithms and deep learning. The field is constantly expanding, and nobody has the time to learn everything.
  • Forgottenticket
    215
    More here:
    https://www.youtube.com/watch?v=IZefk4gzQt4

    "Could individuals thrive without understanding that which would enable them to do so?", in which case I would argue they couldn't because they would no longer hold the mastery needed to advance themselves. — Noblosh

    Yep, individuals would become overly reliant on AI. So it would look similar to communism (or the closest thing to an equal skill-set), with everyone having equal talents, assuming everyone's AI runs the same software.

    But they all reject, or at least deeply question, the fundamental tenet of his life's work: — Wayfarer

    Dennett's views on consciousness are odd and border on the metaphysical. From what I gather, his view is that consciousness reduces to its job and its functional role within the brain (teleofunctionalism). It models the brain's data so the body can know what to respond to. But how does the model make itself known? Is there a secondary model? And why do abstract functional roles have a phenomenal first-person point of view?
    His own theory falls into the homunculus fallacy he is always throwing around. But I made this thread mainly to discuss the black-box science stuff. That said, his theory of consciousness does relate and connect to this. See my reply to John below...

    I think Dennett is exaggerating in saying that in the past the best minds could understand almost everything. Perhaps it might have been true regarding the sciences, but not also of literature, the arts, history, philosophy, metaphysics, languages, and so on. And even if a very few of the very best minds could understand "almost everything", what import could that have for the rest of 'ignorant' humanity? — John

    This is a good point, so this is why I was asking. Dennett seems to think this is a use it or lose it scenario. He fears we will plunge back to the 19th century, since eventually no one will understand anything. His solution is that we train the AIs to model themselves and so be able to tell us what will happen. However, this (Dennett believes) will make the machines conscious, and so they will need human rights, which will overly complicate everything. Dennett thinks creating conscious machines is bad, and so we need alternate solutions.
  • Noblosh
    152
    Yep, individuals would become overly reliant on AI. So it would look similar to communism, with everyone having equal talents, assuming everyone's AI runs the same software. — JupiterJess
    AI doesn't serve humanity, AI is a toolset. Not to mention that a utopia is unrealizable...
  • Janus
    16.3k
    There's no way that anyone understands everything in any field of consequence. That's certainly been true in Information Technology for a long time, even without genetic algorithms and deep learning. The field is constantly expanding, and nobody has the time to learn everything. — Marchesk

    I agree, and I was really asking what would be the difference even if they did.
  • Janus
    16.3k


    Yes, but I don't see any reason to believe that we will create machines that will become conscious.
  • Noble Dust
    7.9k
    Dennett is right about the dangers of a post-intelligent design world, but the problem is that it came about through materialism, which also happens to be part of his worldview. A materialist worldview leads to an emphasis on survival and flourishing, which leads to a dependence on technological innovation to bring about those states because the underlying assumption is that technology is fundamentally good: If the physical is the only aspect of the world, then the human condition needs to be dealt with through human manipulation of that one and only world. The problem with this underlying assumption about the goodness of technology is that it's a fallacy. Take the internet. It can be used to discuss philosophy, or consume child porn. The internet itself is a neutral piece of technology. A given aspect of the human condition, exerted by an individual, determines how any given piece of technology is used. It's a fallacy to assume that harnessing the physical world through technology will lead to an improvement in the human condition. But this view of technology and a materialistic worldview are inseparable, as far as I can tell. Dennett's whole project falls apart, as I believe Wayfarer already said. The human condition needs to be dealt with not through an apprehension of the physical, via technology, but through an apprehension of the metaphysical, via morality.
  • Wayfarer
    22.5k
    There are a lot of heavy hitters that believe that the Internet is on the verge of becoming a real intelligence. I read a story yesterday that Jeff Bezos believes computers will very soon understand the whole of Wikipedia:

    Facebook's Mark Zuckerberg and Amazon's Jeff Bezos believe we are five to 10 years away from computers being able to understand everything that's written in Wikipedia, not just translate.

    Ray Kurzweil has been saying something similar for years.

    But despite the fact that all these guys are billionaires, and I'm just a low-level office worker, I still say they're wrong, on the grounds that intelligence and information processing are fundamentally different in some basic way. When you ask 'in what way?', the answer is: in just the way that all of Daniel Dennett's books manage to ignore (and note that his book Consciousness Explained was dubbed 'Consciousness Ignored' by several philosophers).

    Anyway, what's involved in seeing through materialism is a true gestalt shift, a radically different way of construing the nature of experience and therefore reality. All materialists are obliged to defend the claim that there is an ultimate material or physical entity. In Dennett's view, those are organic molecules:

    Love it or hate it, phenomena like this exhibit the heart of the power of the Darwinian idea. An impersonal, unreflective, robotic, mindless little scrap of molecular machinery is the ultimate basis of all the agency, and hence meaning, and hence consciousness, in the universe.

    To which Richard Dawkins cheerily adds:

    We are survival machines—robot vehicles blindly programmed to preserve the selfish molecules known as genes. This is a truth which still fills me with astonishment.

    Frankly it amazes me that apparently clever people can believe such things. I think it is something like a botnet attack - there is a materialist meme, rather like mental malware, and when it finds literate minds devoid of any spiritual intuition, it infests them and attempts to replicate through the Internet. It's scarily efficient, although not so hard to see through if you can change your wavelength.
  • Streetlight
    9.1k
    *Yawn*. We've always-already been in a 'post-intelligent design' world, and the only people for whom this is an issue are those who've been under the illusion that we ever understood (or could, in principle, 'understand') - in anything more than a partial, interest-laden and provisional way - the forces at work in the world, along with the effects they have. Technology and its effects have never not outstripped our understanding of them, and it has always played - and will continue to play - roles in shaping futures we can barely glimpse. It's only ever been within the confines of the four walls of the laboratory or the workshop - that is, in contextless, condition-fixed space - that anyone has ever had 'full comprehension' - and this because of the artificial (and useful) necessity of keeping the scope at which that comprehension operates as fixed and small as possible.

    So he's right about 'hyper-fragility', but this isn't something new or novel - this has literally been the condition of the Earth since the beginning of its existence.
  • Galuchat
    809
    Is having complete knowledge important? — JupiterJess
    nobody has the time to learn everything — Marchesk
    what would be the difference even if they did — JupiterJess

    The difference would be one of dependence or independence. Dependence enhances social control. Artificial intelligence can be controlled, natural intelligence cannot.

    Dennett seems to think this is a use it or lose it scenario. — JupiterJess
    Of course it is.

    AI doesn't serve humanity, AI is a toolset. — Noblosh
    A prescient warning is contained in this observation.

    There are a lot of heavy hitters that believe that the Internet is on the verge of becoming a real intelligence. — Wayfarer
    Information control is mind control. To what extent is information being controlled on your social networks?

    intelligence and information processing are fundamentally different in some basic way. — Wayfarer
    I don't see how. Intelligence is a measure of memory, knowledge, and controlled/automatic processing capacity.

    this isn't something new or novel — StreetlightX
    Nothing to see here. Move along.
  • Wayfarer
    22.5k
    Intelligence is a measure of memory, knowledge, and controlled/automatic processing capacity. — Galuchat

    Nope. Don't accept that.
  • Galuchat
    809
    You may refer to:
    1) Cattell-Horn-Carroll Theory, Cross-Battery Approach (Flanagan, Ortiz, & Alfonso, 2007).
    2) Wechsler Adult Intelligence Scale (WAIS-IV) (Kaufman & Lichtenberger, 2006).
  • Noble Dust
    7.9k


    But what do you think about intelligence? There are some interesting thoughts about it in this thread, for instance:

    https://thephilosophyforum.com/discussion/1448/intelligence#Item_12
  • Wayfarer
    22.5k
    An anecdote. My very first ever university essay was on the subject of intelligence testing - so-called IQ tests. It was in the psychology department. I argued at length that intelligence is something that can't be measured. I failed. The comment was 'wrong department' (i.e. I had submitted a philosophy essay in response to a psychology assignment).

    The etymology of the word 'intelligence' is interesting and worth contemplating.

    In any case, I maintain that intelligence has a qualitative aspect that will always defy measurement. There are people who are profoundly intelligent in some ways, and completely inept in others, like some artistic geniuses, or 'idiot savants'. I'm not saying that to muddy the waters, but to argue the case that I believe 'intelligence' is in some sense always prior to any attempt at measurement, definition, or specification. Even to say what it is, we have to specify what we mean by the word, which is what IQ testing does. And such tests might be quite effective, along some pre-decided criteria. But the general nature of intelligence will always be, I maintain, something that is beyond definition.
  • Galuchat
    809
    The comment was 'wrong department'. — Wayfarer

    In any case, we agree that the social sciences and humanities do not reduce to neuroscience, physiology, biology, chemistry, or physics.
  • Wayfarer
    22.5k
    Indeed. Actually behind my polemical bluster, I would like to try and bring out a serious point. I recall reading that neoplatonism talked of the significance of the '=' sign. Of course in one sense, the '=' sign is quite a mundane and almost trivial thing. But actually the ability to grasp the idea that 'X=X' or that 'X=Y' is central to the processes of rational intelligence (being, as it were, fundamental to the law of identity and therefore the other logical laws.)

    Now if we pause for a minute and think about non-symbolic intelligence - such as that of animals - then I don't know if they ever grasp that notion of '='. Their intelligence (or cognitive ability) seems to me to operate in terms of stimulus and response. So in one sense, they will know that 'fire means danger', but I don't think they could go the extra step and say that 'anything dangerous has "danger" in common with "fire".' So I don't know if an animal intelligence grasps the sense of meaning or equivalence, that is represented in the commonplace symbol "=".

    It is that background ability to assess and equate and impute meaning - to say that 'this means that', that 'because of this, then that must be', that strikes me as being foundational to the operations of rational intelligence.

    And then, 'rational' is derived from 'ratio', which again is the ability to grasp proportion - to say that X is to Y as A is to B, even if X, Y, A and B are all different things. Which is related to the etymology of 'intelligence', which ultimately comes from 'inter-legere', meaning 'to read between', as in 'reading between the lines', or, maybe, 'seeing what something means'.

    Now, as we have that ability, we can create devices, such as computers, to execute enormous numbers of such judgements at blinding speed. In the same article that I quoted earlier, it was noted that Microsoft said the power of its Azure cloud platform was such that it could translate the entire contents of Wikipedia from English into another language in 0.1 of a second. But regardless, computers are ultimately the instruments of human intelligence; and I am still dubious that they will ever know what all (or any) of that information means.
  • Galuchat
    809
    It is that background ability to assess and equate and impute meaning - to say that 'this means that', that 'because of this, then that must be', that strikes me as being foundational to the operations of rational intelligence. — Wayfarer

    I would re-phrase it thus: reasoning (along with many other cognitive and intuitive functions) is a component of verbal modelling (which is the processing component of intelligence).

    Animals sense, interpret, and nonverbally model their environment, whereas human verbal modelling provides an infinite capacity for description. For those who like to think that animals possess a language faculty (hence, a modelling capacity) similar to that possessed by human beings, all I can say is: they communicate by means of physical signals, but what do they manufacture?

    So what is the agenda behind attempts to equate animal nonverbal modelling with human verbal modelling? Could it be to justify animal-like behaviour on the part of human beings?

    But regardless, computers are ultimately the instruments of human intelligence; and I am still dubious that they will ever know what all (or any) of that information means. — Wayfarer

    This presupposes that AI is being developed to "know what all that information means". I think we need to go back to Noblosh's comment and ask who is developing AI, and for what purpose?
  • Wayfarer
    22.5k
    So what is the agenda behind attempts to equate animal nonverbal modelling with human verbal modelling? Could it be to justify animal-like behaviour on the part of human beings? — Galuchat

    I think that explains a lot of evolutionary-style thinking. Not that I'm for one minute aligned to any form of ID or creationism, but I'm very much aware of 'biological reductionism' or biologism. And I think it does 'give you something to live down to', i.e. it conveys that we're really just animals. What is that saying, 'the tyranny of low expectations'?

    From the Wikipedia entry on the philosophy of Michel Henry:

    Science [in the form espoused by Dennett et al] is a form of culture in which life denies itself and refuses itself any value. It is a practical negation of life, which develops into a theoretical negation in the form of ideologies that reduces all possible knowledge to that of science, such as the human sciences whose very objectivity deprives them of their object: what value do statistics have faced with suicide, what do they say about the anguish and the despair that produce it? These ideologies have invaded the university, and are precipitating it to its destruction by eliminating life from research and teaching.

    The fact that persons promoting these ideologies are looked upon as guides or 'public intellectuals' is both ironic and exasperating. But then, I suppose in a world where a man like Trump can be elected President, such degeneracy is only to be expected.
  • jkop
    905
    ...that reduces all possible knowledge to that of science... — Wikipedia on the philosophy of Michel Henry


    As if there would exist many kinds of possible knowledge... :-}
  • Galuchat
    809
    Case in point?