• igjugarjuk
    178
    We might say that an essay is clear, but a literally transparent essay would be illegible. Perhaps we are to imagine a clear pool of water (the prose) through which it is easy to see to the bottom (its meaning). To what degree is calling an essay clear equivalent to painting a little clear light-blue pool of water on the cover of the book, a hieroglyph or emoji indicating this virtue?

    We might say that a concept or belief is foundational, but concepts and beliefs aren't stacked in 3D space so that one of them is on the bottom supporting the others. The hieroglyph here might be the legs of a table. The word imply developed from an original meaning of "to enfold, enwrap, entangle." Here tangled tree roots might suffice.

    Of course a purple eggplant emoji does not usually refer to a literal eggplant, and essays can be clear without being literally transparent, and rivers do have toothless mouths. That an eggplant can refer to a penis is less impressive, as a mere substitution of familiar objects for the eye, than the use of pictures of ordinary objects to refer to entities like implications. By means of a system of metaphors, perhaps dominantly those for the eye, we've successfully created a metacognitive vocabulary which references (or seems to reference) a menagerie of spectral objects (such as reference itself.)

    Both Derrida and Wittgenstein notice this, and both suggest semantic implications. Where does meaning live? Does the individual 'hieroglyph' point somehow to a Platonic realm to which all our minds somehow have immediate access? This would be like the numeral 2 directing a peculiar inward organ, present in each of us, to the proper address in a quasi-Platonic space. Perhaps reference-in-itself lives there, so that an entire community can be wrong about reference at the same time. On the other end of the spectrum is a vision of proprieties of use as the only foundation of meaning. The arrow sign does not glow with something immaterial (or not in a simple way) but functions 'materially' in the way it's treated, in or as the flow of traffic around it.

    I welcome all kinds of tangents on this theme, but I continue to be fascinated by the individual's grip or lack thereof on the concepts/hieroglyphs one employs. Is knowing what one is talking about more than a practical mastery of token trading? In what sense, if any, is meaning present?



    Frege ridiculed the formalist conception of mathematics by saying that the formalists confused the unimportant thing, the sign, with the important, the meaning. Surely, one wishes to say, mathematics does not treat of dashes on a bit of paper. Frege's ideas could be expressed thus: the propositions of mathematics, if they were just complexes of dashes, would be dead and utterly uninteresting, whereas they obviously have a kind of life. And the same, of course, could be said of any propositions: Without a sense, or without the thought, a proposition would be an utterly dead and trivial thing. And further it seems clear that no adding of inorganic signs can make the proposition live. And the conclusion which one draws from this is that what must be added to the dead signs in order to make a live proposition is something immaterial, with properties different from all mere signs.

    But if we had to name anything which is the life of the sign, we should have to say that it was its use.
    If the meaning of the sign (roughly, that which is of importance about the sign) is an image built up in our minds when we see or hear the sign, then first let us adopt the method we just described of replacing this mental image by some outward object seen, e.g. a painted or modelled image. Then why should the written sign plus this painted image be alive if the written sign alone was dead? -- In fact, as soon as you think of replacing the mental image by, say, a painted one, and as soon as the image thereby loses its occult character, it ceases to seem to impart any life to the sentence at all. (It was in fact just the occult character of the mental process which you needed for your purposes.)

    The mistake we are liable to make could be expressed thus: We are looking for the use of a sign, but we look for it as though it were an object co-existing with the sign. (One of the reasons for this mistake is again that we are looking for a "thing corresponding to a substantive.")

    The sign (the sentence) gets its significance from the system of signs, from the language to which it belongs. Roughly: understanding a sentence means understanding a language.

    As a part of the system of language, one may say, the sentence has life. But one is tempted to imagine that which gives the sentence life as something in an occult sphere, accompanying the sentence. But whatever accompanied it would for us just be another sign.
    — Wittgenstein, The Blue Book
  • igjugarjuk
    178
    The 'phonocentrism' in the philosophical privileging of phonetic over ideographic scripts (as in Hegel) might be explained in terms of hiding from the implications of the hieroglyphic roots of human cognition. As Derrida famously noted, it's easy to understand the voice to have a special relationship with meaning. Sound is as invisible as the spectral entities understood to ride 'inside' words (another metaphor, another picture). It's hard to believe that a sequence of banal pictures can deliver the perfect presence of a non-pictorial 'content,' as Wittgenstein points out in the quote above. Philosophy understands itself to transcend myth. Metaphors may be acceptable in their proper, subordinate place. Once they escape, though, we have chaos and sophistry.
    https://iep.utm.edu/met-phen/#SH1c
    Derrida, from the outset, will call into question the assumption that the formation of concepts (logos) somehow escapes the primordiality of language and the fundamentally metaphorical-mythical nature of philosophical discourse. In a move which goes much further than Ricoeur, Derrida argues for what Giuseppe Stellardi so aptly calls the “reverse metaphorization of concepts.” The reversal is such that there can be no final separation between the linguistic-metaphorical and the philosophical realms. These domains are co-constitutive of one another, in the sense that either one cannot be fully theorized or made to fully or transparently explain the meaning of the other. The result is that language acquires a certain obscurity, ascendancy, and autonomy.

    Personally I don't want language to run away with us, and I don't want reason flooded by ambiguity. But, just as knowing that passion distorts reason isn't the end of our responsibility for inferences, so too an awareness of the pictorial roots of our thought doesn't abolish our need to maintain discipline and avoid being misled by analogy. As Wittgenstein put it, there's a war to be fought against bewitchment, which may include avoiding being bewitched by talk of witches. Language on holiday and flies in bottles are of course metaphorical. So the cure is made from the poison. A series of metaphors, a series of crowbars, each used to jam into the crust of the others.
  • Agent Smith
    9.5k
    This may be off-topic, so, in advance, a thousand apologies.

    Mathematics & Logic seem to be reducible to algorithms of such a low intelligence index that even mechanical machines, and their successors, electronic computers, can perform them. Not only can these contraptions do it orders of magnitude faster, they do so with zero errors (re calculators & computer-generated proofs).
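That reducibility can be sketched in a few lines (a hypothetical toy of my own, not anything offered in the thread): a brute-force tautology checker that 'does logic' by sheer enumeration of truth assignments - all symbol-shuffling, no understanding required.

```python
from itertools import product

# Brute-force propositional tautology checker: a formula is a tautology
# iff it evaluates to True under every assignment of its variables.
def is_tautology(expr, variables):
    return all(
        eval(expr, {}, dict(zip(variables, vals)))  # try one assignment
        for vals in product([True, False], repeat=len(variables))
    )

print(is_tautology("p or not p", ["p"]))    # law of excluded middle
print(is_tautology("p and q", ["p", "q"]))  # fails under, e.g., p=True, q=False
```

The machine never knows what 'p' means; it only grinds through 2^n rows of a truth table.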

    It is assumed, not without good reason, that computers are all syntax and no semantics, and this fact has very disturbing implications - we pride ourselves on being able to do logic & math, these skills we've decided define us, but this is hard to reconcile with the fact that not another life-form but actual inanimate machines can beat us hands down in both math and logic.

    In short, semantics, our forte, our strong suit, feels so small and insignificant compared to syntax, every computer's schtick!

    Add to that the fact that semantics/meaning is a controversial subject in philosophy. Semantics is under assault; it's losing the battle - a point in time may come when people ignore it completely, like computers do today.
  • Pie
    1k
    It is assumed, not without good reason, that computers are all syntax and no semantics, and this fact has very disturbing implications - we pride ourselves on being able to do logic & math, these skills we've decided define us, but this is hard to reconcile with the fact that not another life-form but actual inanimate machines can beat us hands down in both math and logic.Agent Smith

    I just read Erik Larson's The Myth of Artificial Intelligence. He makes a strong case that computers are doomed to stupidity, unless a necessarily unpredictable conceptual revolution changes the scene entirely. He unveils what in retrospect looks like an absurdity at the heart of the 'theology' of The Singularity.

    Our gift is not crunching through possibilities. Our gift is the initial abductive leap. We are also radically enworlded. It's very hard to give computers the near-infinite background knowledge required for disambiguation. For instance, computers have struggled with 'the box is in the pen.' We humans can guess that 'pen' must refer to something one might keep pigs in rather than a writing utensil.

    We must be fair though. With the internet and advances in hardware, enough data and compute became available for brute-force-ish statistical methods to succeed to a practical level at simple translation. This is certainly a triumph, but I'm not holding my breath for the HAL-9000's insightful review of A Spirit of Trust.

    Semantics is under assault, it's losing the battle - a point in time may come when people will ignore it completely like how computers do today.Agent Smith

    Perhaps the reverse is true. Semantics is central, and computers may only reveal that by contrast.
  • Pie
    1k


    Somehow our hieroglyphs (metaphors) gather a meaning not originally there...a new abstract sense. The original image can fade completely.

    'Life' emerges from the PIE root *leip- "to stick, adhere."
    https://www.etymonline.com/search?q=life

    Life sticks around, a pattern that endures by self-replication. It's as if life doesn't want to go away.
  • Joshs
    5.2k
    Our gift is not crunching through possibilities. Our gift is the initial abductive leap. We are also radically enworlded. It's very hard to give computers the near-infinite background knowledge required for disambiguation. For instance, computers have struggled with 'the box is in the pen.' We humans can guess that 'pen' must refer to something one might keep pigs in rather than a writing utensil.Pie

    It makes sense that a machine we call a ‘computer’ will be expected to interact with us via symbolic computation. As long as a representational, symbolic calculative model continues to ground our understanding of our thinking machines, we have no reason to expect that faster speed and greater memory capacity will achieve embodiment and abductive leaps.

    The fact that we now see our own cognitive functioning differently than we did when we relied on computational metaphors to explain human mentation means that we are ready to move beyond the era of computing machines. Our machines will always be able to approximate what we do, since they are but practical models influenced by our best explanations of how we think. As such they are our appendages, and what they can do, and how they do it, evolves along with our understanding of what we do and how we do it.

    If we now believe we are embodied, situated sense-makers, you can be sure we will soon produce machines that echo this. They may be wetware rather than silicon, closer to living things than to inanimate parts.
  • Pie
    1k
    If we now believe we are embodied, situated sense-makers, you can be sure we will soon produce machines that echo this. They may be wetware rather than silicon, closer to living things than to inanimate parts.Joshs

    The book I mentioned shows an awareness of the problem, but this does not mean we will soon have the solution. Can we circumvent or simulate millions of years of evolution?
  • Pie
    1k
    Our machines will always be able to approximate what we do , since they are but practical models influenced by our best explanations of how we think.Joshs

    AI has had several heartbreaking winters. The most recent 'thaw' was the success of deep learning at tasks like translation and image recognition. With mountains of data, fast hardware, and efficient algorithms, very narrow tasks now have famously brittle solutions.

    Our self-knowledge seems more 'semantic' or linguistic than algorithmic and mathematical, even if we can of course model ourselves that way too.
  • Joshs
    5.2k
    Our self-knowledge seems more 'semantic' or linguistic than algorithmic and mathematical, even if we can of course model ourselves that way too.Pie

    Our machines don't have to be algorithmic and mathematical. They are that way because we used to assume human cognition was that way.
  • Joshs
    5.2k
    The book I mentioned shows an awareness of the problem, but this does not mean we will soon have the solution. Can we circumvent or simulate millions of years of evolution?Pie

    This sounds like an example of treating history (in this case natural history) as the ‘weight’ of the past. Technology, like human cultural evolution in general, was never about simulating a past, but rather constructing a future that moves further and further away from its past. The evolution of our thinking machines doesn't simulate a past natural evolutionary process.
  • Pie
    1k
    The evolution of our thinking machines doesn't simulate a past natural evolutionary process.Joshs

    The problem is that we are still smarter than our machines. They can crush us at narrowly specified tasks, yes, but we haven't been able to breathe life into them. One might naturally ask how life (our general intelligence) was breathed into us. Evolution (which some describe as an algorithm) created us from something simpler, step by painful step. So far as I know, that's our only hint.
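The 'evolution as algorithm' idea can be sketched as a toy (a Dawkins-style 'weasel' program; my own illustration, with arbitrary parameters): blind mutation plus cumulative selection climbs, step by step, to something chance alone would essentially never find.

```python
import random

# Cumulative selection toy: mutate a string, keep the fitter variant,
# repeat. Fitness = number of characters matching a fixed target.
random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character independently has a small chance of changing.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(CHARS) for _ in TARGET)  # random start
gen = 0
while fitness(parent) < len(TARGET) and gen < 2000:
    gen += 1
    # Breed 100 mutants; selection keeps the best if it's no worse.
    child = max((mutate(parent) for _ in range(100)), key=fitness)
    if fitness(child) >= fitness(parent):
        parent = child

print(gen, parent)
```

The point is only that small selected steps accumulate; nothing here pretends to model real biology.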

    Note that AI started with grand plans for a human-like Turing-test-passing conversationalist. It was forced to settle for less. It's not that we have only just realized we should ask for more; we always wanted to play God and create the ADAM-9000.
  • Pie
    1k
    Our machines dont have to be algorithmic and mathematical. They are that way because we used to assume human cognition was that way.Joshs

    What else do you have in mind if not life itself? We are the semantic computers that we'd like to be able to build out of something other than our own flesh. The easy route to generating general intelligence is just parenthood. This brings an ethical issue to mind. If we succeed too well, we might be guilty of attempted slavery. What do we want these things for? Probably to work, possibly also as pets.
  • Joshs
    5.2k
    The problem is that we are still smarter than our machines. They can crush us at narrowly specified tasks, yes, but we haven't been able to breathe life into them. One might naturally ask how life (our general intelligence) was breathed into us. Evolution (which some describe as an algorithm) created us from something simpler, step by painful stepPie

    How do we know that we’re smarter than machines? Machines are texts translated into material processes. If we are smarter than machines, then we are smarter than the texts we create. But wait a minute. Who is this ‘we’ that I am referring to? We only know about this ‘we’ who is smarter than the text describing our machines by virtue of another text that represents our understanding of what ‘we’ can do that machines can’t.

    So it looks like a competition between two kinds of texts, one of which uses a symbolic computational language and the other a language we haven’t clearly articulated yet. That is to say, this second text hasn't been clearly enough articulated to render it as a new kind of machine text that functions comparably to the ‘we’ text.

    The problem with generating a satisfying new machine text (and material machine) isn’t due to our current machine models not being ‘alive’. They are very much alive in the sense that they express an objectively causal metaphysics that we have traditionally relied upon to articulate both our understanding of the living and the non-living.
    The problem lies in our dualistic causal model of the living, which makes it necessary to split it off from the non-living via some kind of gap or gulf, the ‘breath’ or spirit of life, and encourages us to talk of natural history in terms of algorithmic processes. An algorithmically generated history is not a genuine history; it is what post-structuralists call a ‘historicism’, which is no history at all, and no temporalization at all. It is the attempt to arrest time and change by enclosing it within a scheme.

    The problem is not that we don’t know how to create life out of the inanimate; our machine texts are already animate in that they are interactive texts. The problem is that we don’t have an adequate enough understanding of what it means to be animate. Our machines will ‘come to life’ as we continue to progress in our understanding of the life we and our machine appendages already are. Our technological advances will provide this insight by transforming our living engagements with our machines, not by recapitulating earlier steps in our own evolution, but by inventing new steps. That is precisely what our machines are and do. They contribute to the creation of new steps in natural evolution, just as birds' nests, rabbit holes, spiders' webs and other niche innovations do. Saying our machines are smarter or dumber than us is like saying the spider's web or bird's nest is smarter or dumber than the spider or bird. Should not these extensions of the animal be considered a part of the living system? When an animal constructs a niche it isn't inventing a life-form, it is enacting and articulating its own life-form. Machines, as parts of niches, belong intimately and inextricably to the living self-organizing systems that ‘we’ are.
  • Pie
    1k
    How do we know that we’re smarter than machines? Machines are texts translated into material processes.Joshs

    Turing machines are very limited kinds of texts. https://en.wikipedia.org/wiki/Turing_machine
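A minimal sketch of what such a 'text' amounts to (hypothetical toy code, for illustration only): the whole machine is just a finite transition table plus a tape, mechanically applied.

```python
# Minimal Turing machine simulator: the "program" is nothing but a
# transition table mapping (state, symbol) -> (new state, write, move).
def run_tm(table, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; '_' means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# A machine that flips every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "10110"))  # -> 01001
```

Everything the machine 'knows' is in that table; there is nowhere for a meaning to hide.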

    I think the experts in the field can be trusted when they say they haven't achieved the dream. I did low-level research in this field myself (so I'm neither expert nor total outsider). It's demystifying. SGD is just a crude but surprisingly effective search through parameter space, glorified curve fitting.
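To unpack 'glorified curve fitting' (a toy sketch with made-up data, not anyone's actual research): stochastic gradient descent just nudges parameters downhill on an error, one sample at a time.

```python
import random

# SGD as curve fitting: recover y = w*x + b from noisy samples by
# following the gradient of the squared error on one point at a time.
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.005
for epoch in range(300):
    random.shuffle(data)
    for x, y in data:
        err = (w * x + b) - y      # prediction error on this sample
        w -= lr * 2 * err * x      # d(err^2)/dw
        b -= lr * 2 * err          # d(err^2)/db

# (w, b) should land near the true (2.0, 1.0)
print(round(w, 2), round(b, 2))
```

Nothing in the loop is more mysterious than repeated nudging; the 'search through parameter space' is exactly this.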

    Perhaps some alien species already has managed something more exciting. Perhaps our own species will in the future.
  • unenlightened
    8.7k
    The 'phonocentrism' in the philosophical privileging of phonetic over ideographic scripts (as in Hegel) might be explained in terms of hiding from the implications of the hieroglyphic roots of human cognition.igjugarjuk

    'Might be explained' might be reduced to a one-dimensional string. Each atom computed one at a time. And then knitting rebuilds the world as an interlocking network - the screen refreshed in the blink of an eye - almost as if more than one thought can be entertained at the same time.

    Meanwhile, the world has changed everything, all at once, and I cannot hope to keep up; I can barely walk and chew gum at the same time. That's how single minded I am.
  • Joshs
    5.2k
    I think the experts in the field can be trusted that they haven't achieved the dreamPie

    What exactly does this dream consist in? I would think those most likely to break through to new dimensions of thinking concerning what machines can do would not be experts within the field as it currently defines itself, but instead be located OUTSIDE the field.

    Perhaps our own species will in the future.Pie

    Why ‘perhaps’? What sort of mysterious barrier are you erecting in your imagination to technological innovation in this area? Maybe the dream is caught up within its own algorithmic prison. Rorty once described progress in thinking as more a matter of changing the subject than of elaborating a theme, more a matter of dissolving problems than solving them, making the old ways ‘not even wrong’. The problem with both realism and its counterpart, anti-realism, is that each believes ‘the way things really are’ is a metaphysical rather than a contingent, historically relative constraint. This is true of Popperian falsificationism, and even Hegel’s relativism was trapped within an algorithmic dialectical cage.
  • Agent Smith
    9.5k
    What you say is true of course - computers are, to put it bluntly, dumb, but do remember they're showcased as the ultimate human invention. Odd that, oui monsieur? In short, that computers have an IQ comparable to a bumble bee's says more about us than about computers themselves.

    Anyway, do Wittgenstein's language games have any bearing on the issue? On such a view (meaning is use), playing a different language game could be conflated with stupidity.
  • T Clark
    13k
    I welcome all kinds of tangents on this theme, but I continue to be fascinated by the individual's grip or lack thereof on the concepts/hieroglyphs one employs. Is knowing what one is talking about more than a practical mastery of token trading? In what sense, if any, is meaning present?igjugarjuk

    I've been reading several books that seem relevant to the issues you've raised. By "been reading" I mean that I've started them and they're sitting on my table gathering dust.

    • "The Origin of Consciousness in the Breakdown of the Bicameral Mind” by Julian Jaynes - This is an odd book presenting an odd idea that I don't really buy. Even so, Jaynes has a really interesting section upfront where he describes how consciousness is built on a foundation of metaphor.

    • “Surfaces and Essences: Analogy as the Fuel and Fire of Thinking” by Douglas Hofstadter and Emmanuel Sander - As you can see from the title, this book has a similar view, although the book has a broader scope than just consciousness.

    • "Metaphors We Live By" by George Lakoff and Mark Johnson - This book has a similar idea, but focuses on metaphors that are built into language rather than those which we create ourselves to connect ideas that might not seem connected, but which have personal meaning for each of us.

    Looking at my own thought and memory processes introspectively, I have always noted a metaphorical component. I have noticed that my ideas, memories, feelings generally have imaginary tags attached. The tags might be feelings, moods, or images. An idea with a particular tag tends to bring to mind those with similar ones. Letters and numbers tend to have colors associated with them. I'm not proposing that is how it works for everyone. It is my idiosyncratic way of seeing things, but I think it's consistent with what the authors are talking about.
  • Joshs
    5.2k
    In short, that computers have an IQ comparable to a bumble bee's says more about us than about computers themselves.Agent Smith

    If only. I wouldn't even compare computer intelligence favorably to a virus.
  • Agent Smith
    9.5k
    If only. I wouldn't even compare computer intelligence favorably to a virusJoshs

    Did a quick google search. There's no consensus on what the IQ of a computer is. Some even say it's 0. I wonder what Garry Kasparov (lost to Deep Blue) and Lee Sedol (lost to AlphaGo) have to say about this. :chin: