• Michael
    14.4k
    I prefer to argue the more complex story myself - that colours are properties of a pragmatic or functional mind~world relation. So this is an enactive, ecological or embodied approach.apokrisis

    So enactivism?
  • Michael
    14.4k
    In summary, neural networks are discovering patterns that our visual and auditory systems find, without any programming telling them to do so, in the case of unsupervised learning. This lends support to mind-independent shapes and sounds out there in the world that we evolved to see and hear. And this would be direct, because neural networks have no mental content to act as intermediaries.Marchesk

    I don't think this is right. Presumably these neural networks simply recognise patterns in the magnetism on the hard drive which, although covariant with the visual shape (given whatever algorithms translate the input into binary), are not the same thing.

    And "discover" might not even be the right word. "Respond" is better. The deterministic behaviour of the computer causes it to output the word "circle" when the magnet passes over a particular arrangement of magnetised and non-magnetised pieces.

    You seem to be reifying our abstract description of how computers work.
  • apokrisis
    6.8k
    Still using those "direct" and "indirect" terms - as if they really mean anything, Apo?Harry Hindu

    Did you notice the thread title or read the OP?

    Your experience is part of the world, no?Harry Hindu

    Your experience is your world, no?
  • apokrisis
    6.8k
    So enactivism?Michael

    Correct. I said that.

    Presumably these neural networks simply recognise patterns in the magnetism on the hard drive...Michael

    A good point. The system has no eyes. It is just fed 18x18 chunks of pixels - strings of hex code.

    It might be worth checking the paper - https://arxiv.org/pdf/1112.6209v5.pdf

    Our deep autoencoder is constructed by replicating three times the same stage composed of local filtering, local pooling and local contrast normalization.

    In our experiments, the first sublayer has receptive fields of 18x18 pixels and the second sub-layer pools over 5x5 overlapping neighborhoods of features (i.e., pooling size). The neurons in the first sublayer connect to pixels in all input channels (or maps) whereas the neurons in the second sublayer connect to pixels of only one channel (or map).

    While the first sublayer outputs linear filter responses, the pooling layer outputs the square root of the sum of the squares of its inputs, and therefore, it is known as L2 pooling.
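
    To make "L2 pooling" concrete, here is a rough numpy sketch of the operation the paper describes - not the authors' actual code, and the 18x18 array is just an illustrative stand-in for one map of filter responses:

        import numpy as np

        def l2_pool(features, size=5):
            """L2 pooling: each output is the square root of the sum of squares
            of the inputs in a size x size neighbourhood (stride 1, 'valid')."""
            h, w = features.shape
            out = np.empty((h - size + 1, w - size + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    patch = features[i:i + size, j:j + size]
                    out[i, j] = np.sqrt(np.sum(patch ** 2))
            return out

        # Illustrative only: a fake 18x18 map of linear filter responses.
        responses = np.random.randn(18, 18)
        pooled = l2_pool(responses)   # 14x14 map of pooled magnitudes
        print(pooled.shape)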

    The system seems pretty Kantian in terms of the amount of a priori processing structure that must be in place for "unsupervised" learning to get going.

    I'd note in particular the dichotomous alternation of filtering and pooling. Or differentiation and integration. Followed by the third synthesising step of a summating normalisation.

    In doing their best to replicate what brains do, the computer scientists must also build a system that pulls the world apart to construct a meaningful response - one that separates signal from noise ... so far as it is concerned.
  • Marchesk
    4.6k
    You seem to be reifying our abstract description of how computers work.Michael

    Go read any description of artificial neural networks. When they want to get technical, they talk in terms of linear algebra, matrices, and finding the minimum slope for error correction. How the computer actually accomplishes computation is irrelevant.
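
    For concreteness, that "minimum slope for error correction" talk is gradient descent. A toy sketch of the idea - the one-weight model and the data here are made up for illustration, not taken from any real system:

        # Plain gradient descent on a one-weight model y = w * x.
        xs = [1.0, 2.0, 3.0]
        ys = [2.0, 4.0, 6.0]      # the "right answers" (y = 2x)

        w = 0.0                    # start with a bad guess
        lr = 0.05                  # learning rate

        for step in range(100):
            # slope (derivative) of the mean squared error with respect to w
            grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
            w -= lr * grad         # step downhill along the slope

        print(round(w, 3))         # converges towards 2.0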

    To the extent that artificial neural networks function like biological ones, the physical instantiation is irrelevant. But nobody thinks they're exactly the same, only somewhat analogous. And of course the biological details matter for how a brain actually functions.
  • Marchesk
    4.6k
    I'm a realist who argues in favor of direct physiological sensory perception. I'm not sure if I'd say/argue that direct perception requires awareness of that which is being perceived. Awareness requires an attention span of some sort. Bacteria directly perceive. I find no justification for saying that bacteria are aware of anything at all...creativesoul

    I don't know whether philosophers spend much time debating perception in bacteria, but when it comes to human perception, the argument between direct and indirect realists is over whether we are aware of the objects themselves via perception, or something mental instead.

    Is access direct or indirect? Are objects really out there or just mental? Is there any way we can tell? And to what extent does the mind construct those objects based on categories of thought that aren't necessarily reflected in the structure of the world?
  • Marchesk
    4.6k
    How could we argue that the world is coloured as we “directly experience” it when science assures us it is not?apokrisis

    I just came across scientific direct realism on the Internet Encyclopedia of Philosophy. Locke's primary properties, like shape, would be directly perceived, while color would be the means by which we see shape, even though it belongs to our visual system.

    Of course there are other flavors of direct realism that might say something different about color. Some would even be color realists, although I have a hard time seeing how that can be defended. But they do try.
  • Marchesk
    4.6k
    But in fact a choice of "what to see" is already embedded by the fact some human decided to point a camera and post the result to YouTube. The data already carries that implicit perceptual structure.apokrisis

    But you could use a camera stationed anywhere, and see what sort of objects an unsupervised network will learn to categorize.

    And there are autonomous vehicles designed using deep learning techniques. A self-driving car needs to be able to handle any situation a human might encounter when driving on the road. Here is a short video from one of the companies working on the challenge:

    https://spectrum.ieee.org/cars-that-think/transportation/self-driving/how-driveai-is-mastering-autonomous-driving-with-deep-learning
  • apokrisis
    6.8k
    Locke's primary properties, like shape, would be directly perceived, while color would be the means by which we see shape, even though it belongs to our visual system.Marchesk

    Yep, that is the line that direct realism tried to defend back in the 17th century. It would give up qualitative sensation and insist on the directness of quantitative sensation.

    But psychological science has obliterated that line - even though I agree most people haven't really noticed.

    Of course there are other flavors of direct realism that might say something different about color. Some would even be color realists, although I have a hard time seeing how that can be defended. But they do try.Marchesk

    Great. You seem persuaded that colour experience is definitely indirect - mentally constructed in some strong sense.

    I can see why you might then protest that the shapes of objects are just self-evident - unprocessed, unvarnished, direct response to what is "out there". It seems - as Locke argued - realism can be secured in some foundational way. A shape cannot be misrepresented. It therefore requires no interpretation. Our experience of a shape is unmediated.

    But as I say, psychology has shown just how much interpretation has to take place to "see a shape". The useful sanitary cordon between primary and secondary sensation has evaporated as we've learnt more.
  • apokrisis
    6.8k
    But you could use a camera stationed anywhere, and see what sort of objects an unsupervised network will learn to categorize.Marchesk

    Sure. The AI labs will want to keep improving. But a computer that can actually do human things might have to start off as helpless and needy as a human baby. Would you want to have to wait 20 years for your Apple Mac to grow up enough to be useful?

    It's a two-edged thing. Yes, it would be great to identify the minimal "Kantian" hardwiring needed to get a "self-educating machine" started. But then that comes at the cost of having to parent the damn device until it is usefully developed.

    So - philosophically - neural networks are already based on the acceptance that the mind has Kantian structure. Awareness is not direct. So to replicate awareness in any meaningful fashion, it is the mediation - the indirect bit - we have to understand in a practical engineering sense.

    The unsupervised learning is then the flip side of this. To the degree a machine design can learn from an interaction with the world, we are getting down to the deep Kantian structure that makes biology and neurology tick.

    And as Michael points out, step back from the "computer just as fascinated by internet cat videos" nonsense used to hype DeepMind, and you can see just how far the AI labs have to go.

    DeepMind's "reality" actually just is a hex code string, magnetic patterns on a disk. It is forming no picture of the world, and so no sense of self. It is the humans who point and say golly, DeepMind sure loves its YouTube cat clips.

    So I agree it is an interesting experiment to consider. I just draw the opposite conclusion about what it tells us.
  • Michael
    14.4k
    How the computer actually accomplishes computation is irrelevant.Marchesk

    It isn't. You want to talk about it in terms of the computer recognising shapes as we do, and so conclude that shapes are mind-independent things. But a look at the mechanics of computation will show that this is the wrong conclusion to draw. The computer simply responds to the magnetic charge on the hard drive.
  • Marchesk
    4.6k
    The computer simply responds to the magnetic charge on the hard drive.Michael

    And if the algorithm in question is using thousands of GPUs or TPUs (tensor processing units) reading from a bunch of solid state drives over a server farm, or being fed data over a network?

    You could argue that a neuron simply responds to an electrical charge from a connected neuron. What does that have to do with perception?
  • Marchesk
    4.6k
    I can see why you might then protest that the shapes of objects are just self-evident - unprocessed, unvarnished, direct response to what is "out there".apokrisis

    The brain has to be able to recognize a shape somehow. It's not magic, and shapes don't float along on photons into the eyes and travel from there on electrons into the homunculus sitting in the visual cortex.

    A distinction needs to be made with naive realism, the unreflective and unscientific view on which seeing the world is just like looking out a window onto things. Obviously, that's not how it works. No philosopher is going to defend a totally naive view of vision which involves an object showing up in the mind magically. There has to be a process.

    The question is whether the process of perception creates an intermediary which we are aware of when perceiving, or whether it's merely the mechanics of seeing, hearing, touching, etc.
  • apokrisis
    6.8k
    You could argue that a neuron simply responds to an electrical charge from a connected neuron.Marchesk

    No neuroscientist could accept that simple account. Neurons respond to significant differences in the patterns of connectivity they are feeling. And that can involve thousands of feedback, usually inhibitory, connections from processing levels further up the hierarchy.

    So mostly a neuron is being actively restricted. And that constraint is being exerted from above. The brain is organised so that ideas - expectations, goals, plans - lead the way. The self-centred indirectness is what we see when we actually put individual neurons under the microscope.

    The brain has to do be able to recognize a shape somehow. It's not magic, and shapes don't float along on photons into the eyes and travel from there on electrons into the homunculus sitting in the visual cortex.Marchesk

    Of course. I'm not defending any caricature story here. No need to put these words in my mouth.

    No philosopher is going to defend a totally naive view of vision which involves an object showing up in the mind magically. There has to be a process.Marchesk

    Yep. So now again we must turn to why you insist this is better characterised by "direct" than "indirect".

    If your argument is that the brain has the goal of being "as direct and veridical and uninterpreted as possible", then that is the view I'm rejecting. It is a very poor way to understand the neuroscientific logic at work.

    But as you say, I wouldn't then want to be batting for good old-fashioned idealism. We don't just imagine a world that is "not there".

    So I am carefully outlining the semiotic version of indirect realism which gives mediation its proper functional place.

    The question is whether the process of perception creates an intermediary which we are aware of when perceiving, or whether it's merely the mechanics of seeing, hearing, touching, etc.Marchesk

    That is no longer my question as I reject both direct perception and homuncular representation. My approach focuses on how the self arises along with the world in experience.

    The surprise for most is that both of these things in fact need to arise together.
  • Marchesk
    4.6k
    No neuroscientist could accept that simple account. Neurons respond to significant differences in the patterns of connectivity they are feeling. And that can involve thousands of feedback, usually inhibitory, connections from processing levels further up the hierarchy.apokrisis

    And no computer scientist is going to say that all an algorithm is doing is reading a magnetic charge.
  • Marchesk
    4.6k
    If your argument is that the brain has the goal of being "as direct and veridical and uninterpreted as possible", then that is the view I'm rejecting. It is a very poor way to understand the neuroscientific logic at work.apokrisis

    Let's make this really, really simple. What is the result of visually perceiving a tree?

    A. Seeing a mental image.

    B. Seeing the tree.

    I'll let your unsupervised neural network categorize the two.
  • creativesoul
    11.6k
    I'm a realist who argues in favor of direct physiological sensory perception. I'm not sure if I'd say/argue that direct perception requires awareness of that which is being perceived. Awareness requires an attention span of some sort. Bacteria directly perceive. I find no justification for saying that bacteria are aware of anything at all...
    — creativesoul

    I don't know whether philosophers spend much time debating perception in bacteria, but when it comes to human perception, the argument between direct and indirect realists is over whether we are aware of the objects themselves via perception, or something mental instead.

    Is access direct or indirect? Are objects really out there or just mental? Is there any way we can tell? And to what extent does the mind construct those objects based on categories of thought that aren't necessarily reflected in the structure of the world?
    Marchesk

    Seems to me that both sides are wrong for the same reason. They both work from an impoverished language 'game' (linguistic framework).

    The point about bacteria was to highlight some of the impoverishment...

    Just yet another case of the self-imposed bewitchment of inadequate language use.
  • creativesoul
    11.6k
    A pigeon can make the same perceptual discrimination. Human perception is of course linguistically scaffolded and so that takes it to a higher semiotic level.
    — apokrisis

    Pigeon perception is not linguistically scaffolded. They have no concept of "cat".

    You need to sort out the incoherence and/or equivocation in your usage of the term "perception".
    creativesoul

    Something to do with thought/belief I take it? LOL.apokrisis

    You said that pigeons can make the same perceptual discrimination as humans after saying that "...perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat"."

    The incoherence and/or equivocation is the direct result of self-contradiction. So, yeah... I suppose it does have something to do with thought/belief; particularly the kind that doesn't warrant much more of my attention. Some folk care about coherence and clarity. Others apparently don't.
  • creativesoul
    11.6k


    Hey March. I just want to point something out, just in case you've not noted it. Pay very close attention to how the term "perception" is being used in these discussions. Re-read the thread with that as the aim...
  • Marchesk
    4.6k
    see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat".creativesoul

    The pigeon doesn't understand "the cat" as a cuddly pet or abstract concept, but it can still recognize it, and likely has a similar visual experience to humans.
  • creativesoul
    11.6k
    The pigeon doesn't understand "the cat" as a cuddly pet or abstract concept, but it can still recognize it, and likely has a similar visual experience to humans.Marchesk

    That's the sort of stuff that needs unpacking and/or explaining, March...
  • Marchesk
    4.6k
    Pay very close attention to how the term "perception" is being used in these discussions.creativesoul

    That's a good point. How do philosophers typically define perception?
  • creativesoul
    11.6k
    As a stand-in for all sorts of things from rudimentary seeing and hearing to complex linguistic conceptions...

    Poorly.
  • Marchesk
    4.6k
    As a stand in for all sorts of things from rudimentary seeing and hearing to complex linguistic conceptions...creativesoul

    A perception shouldn't be a synonym for a conception, so there needs to be some differentiation there. And a "cat watching" neural network is only classifying different pixel patterns that match up to what humans recognize as cats with a certain degree of accuracy.
  • creativesoul
    11.6k
    There definitely need to be some meaningful and clear distinctions drawn and maintained between perception and conception, amongst all sorts of other notions. As it pertains to the 'mental realm', until very recently in historical time the whole of philosophy has been working with impoverished notions of thought, as well as all sorts of inadequate dichotomies. But I digress...

    I'm not knowledgeable enough regarding how computers work to say much at all regarding that. However, it is my understanding that binary code still underwrites it all. Is that correct?
  • Marchesk
    4.6k
    I'm not knowledgeable enough regarding how computers work to say much at all regarding that. However, it is my understanding that binary code still underwrites it all. Is that correct?creativesoul

    It's all binary in that the circuit logic is based on boolean algebra (true/false or 1/0). The instructions a processor carries out are based on combinations of 1s and 0s. But the functionality humans care about is understood at an algorithmic level, because that's what we designed computers to do.
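
    A minimal illustration of that point - the values are arbitrary, it just shows booleans behaving as 1/0 and an integer being, underneath, a pattern of bits:

        # Boolean algebra at the bottom: True/False behave like 1/0.
        print(True & False)          # False
        print(True | False)          # True

        # Any integer the machine handles is ultimately a pattern of bits.
        n = 202
        print(format(n, '08b'))      # '11001010'

        # An operation like addition is defined entirely over such patterns.
        print(format(n + 1, '08b'))  # '11001011'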

    A trained neural network that recognizes a word would have a vector of positive and negative real numbers representing that word. But nobody really understands what those numbers represent. They're the outcome of training a network to recognize the word "cat" for example (written or auditory depending on the network). They're the different weights and biases of the inputs that make the network recognize "cat".

    Of course those real numbers are stored as bit patterns.
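
    To sketch what such a vector looks like - these particular numbers are invented, not taken from any trained network:

        import struct

        # A made-up stand-in for the kind of vector described above:
        # a handful of positive and negative reals associated with "cat".
        cat_vector = [0.83, -1.20, 0.05, 2.41, -0.37]

        # The individual numbers carry no readable meaning; they only matter
        # as weights that push the network towards or away from "cat".
        weights = [1.1, -0.4, 0.0, 0.9, -0.2]
        score = sum(v * w for v, w in zip(cat_vector, weights))
        print(round(score, 3))       # a "cat-ness" score, nothing more

        # And each real number is itself stored as a bit pattern:
        bits = struct.unpack('>I', struct.pack('>f', cat_vector[0]))[0]
        print(format(bits, '032b'))  # the 32 bits encoding 0.83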
  • creativesoul
    11.6k
    Yeah, that's what I thought. Aren't all lines of binary code basically true 'statements'?
  • creativesoul
    11.6k
    Looks like BASIC or QBasic or something similar...

    So, the computer prints "Here kitty, kitty." whenever 'it is shown' a catlike image and it is able to somehow 'match' the image to its database of what counts as a cat?
  • Marchesk
    4.6k
    I was just showing that code can be false, and that was pseudocode, but I updated with real code from a programming language.

    It doesn't have anything to do with neural networks, just that you can represent false statements in code. And I'm using the Unicode character for a cat face, because some programming languages let you use any Unicode character.

    Although maybe you meant the code has to be true in the error-free sense; errors can of course crop up while the code is running.
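
    Not the actual code from the earlier post (it isn't reproduced here), but a minimal Python illustration of the same point - a well-formed statement involving the cat-face character that evaluates to false:

        # Source code can express statements that evaluate to False.
        cat = "\N{CAT FACE}"      # the Unicode cat-face character

        print(cat == "dog")       # False: a perfectly well-formed falsehood
        assert cat != "dog"       # and the program happily runs on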