• TogetherTurtle
    41
    Nope. I don't really know much about Buddhism in general. Maybe two paths have reached the same end? All I know is that we don't see the world exactly as it is. Everything comes together to create a facade. The way we examine the world is not the only way to do so, nor is it necessarily the most effective. There are many more possible senses than the five we have, and the ones we do have are easy enough to trick as it is.
  • Eugenio Ullauri
    6
    Short Answer: Software

    Long Answer: Software, and the materials they are made with.
  • TheMadFool
    2.3k
    "All I know is that we don't see the world exactly as it is." (TogetherTurtle)

    Is this evidential or just a gut feeling?
  • TheMadFool
    2.3k
    "All I know is that we don't see the world exactly as it is." (TogetherTurtle)

    Agreed. I too believe our senses can be deceived or that the picture of the world we create out of them isn't the actual state of affairs. It's like taking a photograph with a camera. We have an image in our hands but it isn't the actual object the image is of.

    "Everything comes together to create a facade." (TogetherTurtle)

    As far as I'm concerned, there's a limit to illusion. EVERYTHING can't be an illusion, especially our sense of self. In the basic definition of an illusion we need:
    1. an observer A
    2. a real object x
    3. the image (illusion) of the object x, x1

    I can accept 3, but what is undeniable is the existence of the observer A who experiences the illusion x1 of the real object x.

    Are you saying the observer A itself is an illusion? In what sense?

    In the Buddhist context, the self is an illusion because it lacks any permanent existence. The self, according to Buddhism, is a composite "material" and when decomposed into its parts ceases to exist.
  • TogetherTurtle
    41
    "Is this evidential or just a gut feeling?" (TheMadFool)
    It is evidential to some extent. I apologize if I didn't make it clear before, but I don't believe nothing exists. I'm more along the lines of thinking that how we view existing objects is arbitrary.

    "As far as I'm concerned, there's a limit to illusion. EVERYTHING can't be an illusion, especially our sense of self." (TheMadFool)

    I agree with this. When I said everything, I meant every way we experience the world. Your sense of hearing, for instance, can be tricked by focused, weak sound waves; that is what you are experiencing when you put on headphones. While no one else can hear your music or audiobook or other media, you hear it as if the performer were in the room with you. This, of course, is not the case, and your other senses verify that. Therefore, it is very possible that some things in the natural world go unnoticed because we can't sense them. What we sense is very selective, labeled arbitrarily, and subject to trickery.

    I may in time take interest in the Buddhist view on this subject. For a religion, they have a strangely materialistic view of the concept of a soul.
  • aporiap
    91
    "Yes. That can be correctly classified as some level of self-awareness. This leads me to believe that most of what we do - walking, talking, thinking - can be replicated in machines (much like worms or insects). The most difficult part is, I guess, imparting sentience to a machine. How does the brain do that? Of course, that's assuming it's better to have consciousness than not. This is still controversial in my opinion. Self-awareness isn't a necessity for life and I'm not sure if the converse is true or not."
    Hmm, I would think self-awareness comes part and parcel with some level of sentience. I think a robot that can sense certain stimuli - e.g. light, color, and their spatial distribution in a scene - and can use that information to inform goal-directed behavior must have some form of sentience. They must hold some representation of the information in order to manipulate it and use it for goal-based computations, and they must have some representation of their own goals. All of that (i.e. having a working memory of any sort) presupposes sentience.
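    A minimal sketch of that picture, with every name invented for illustration: an agent that holds a stored representation of a sensed stimulus and a representation of its own goal, and computes its behavior from both.

    ```python
    # Hypothetical minimal goal-directed agent (all names made up for illustration).
    from dataclasses import dataclass

    @dataclass
    class Agent:
        goal_brightness: float        # representation of its own goal
        sensed: float = 0.0           # working-memory representation of the stimulus

        def sense(self, brightness: float) -> None:
            self.sensed = brightness  # hold the information in order to use it later

        def act(self) -> str:
            # Goal-based computation over the two stored representations.
            return "stay" if self.sensed >= self.goal_brightness else "seek_light"

    agent = Agent(goal_brightness=0.8)
    agent.sense(0.3)
    print(agent.act())  # -> "seek_light"
    ```

    Whether holding two stored values in this way already amounts to sentience is, of course, exactly the point under dispute.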
  • Heiko
    118
    "They must hold some representation of the information in order to manipulate it and use it for goal-based computations, and they must have some representation of their own goals." (aporiap)
    The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y)-tuples and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xes. Once the training sets are handled sufficiently well, chances are good it will produce plausible outputs for new Xes.
    The point here is: those things just "work" - not meaning that they work well, but that the whole idea of the concept is not to implement specific rules but just to train a "black box" that solves the problem.
    Mathematically, such AIs partition the input space with hyperplanes, encircling regions for which certain results are to be produced.
    These things do not exactly have a representation of their goals - they are that representation.
    One cannot exactly forecast how such an AI develops without stopping alteration of the matrices at some point: the computation that would be needed to do so is basically said development of the AI itself.
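    A minimal sketch of that training picture, assuming a toy two-layer network and made-up XOR data (the layer sizes, learning rate, and iteration count are arbitrary):

    ```python
    import numpy as np

    # Toy supervised learning: tweak matrices so m(X) ~= Y on the training tuples.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    Y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # first "matrix" in the chain
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # second

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(5000):
        # Forward pass: each layer is an affine map (one hyperplane per unit).
        H = sigmoid(X @ W1 + b1)
        out = sigmoid(H @ W2 + b2)
        # Backward pass: nudge the matrices toward the right Y for each X.
        d_out = (out - Y) * out * (1 - out)
        d_H = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_H;   b1 -= lr * d_H.sum(axis=0)

    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # plausible Y for the Xes
    ```

    Note that nothing in the trained result represents a goal explicitly; the matrices just are the input-output behavior, which is the point above.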
  • aporiap
    91
    "The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y)-tuples and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xes. Once the training sets are handled sufficiently well, chances are good it will produce plausible outputs for new Xes." (Heiko)
    Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the C. elegans worm model, etc. And even in these cases, there is still a self-monitoring mechanism at play -- the optimizing algorithm. While 'blind' and not conventionally assumed to involve 'self awareness', I'm saying this counts -- it's a system which monitors itself in order to modify or inform its own output. Fundamentally, the brain is the same, just scaled up, in the sense that there are multiple self-monitoring, self-modifying blind mechanisms working in parallel.

    "These things do not exactly have a representation of their goals - they are that representation." (Heiko)
    They have algorithms which monitor their goals and their behavior directed toward their goals, no? If so, then they cannot merely be the representation of their goals.
  • Heiko
    118
    "Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the C. elegans worm model, etc." (aporiap)
    Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way.

    "They have algorithms which monitor their goals and their behavior directed toward their goals, no?" (aporiap)
    The whole program is written to fulfill a certain purpose. How should it monitor that?
  • aporiap
    91
    "Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way." (Heiko)
    I still think neural networks can be described as self-monitoring programs - they modify their output in a goal-directed way in response to input. There must be learning rules operating in which the network takes into account its present state and determines how this state compares to a more optimal state that it is trying to achieve. I think that comparison and learning process is an example of self-monitoring and self-modification.
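    Read that way, the 'self monitoring' is just the loss computation: the program measures the gap between its present state and the state it is trying to achieve, and uses that measurement to modify itself. A minimal sketch with made-up data (the true rule is y = 2x):

    ```python
    # A program that monitors its own state (error) and modifies itself (weight update).
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up (x, target) pairs
    w = 0.0                                       # the model's current state

    for step in range(100):
        # Monitor: how far is the present state from the more optimal state?
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)
        # Modify: nudge w in the direction that reduces the monitored error.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.05 * grad

    print(w)  # approaches 2.0: the learning rule drove the state toward the goal
    ```

    Note that the monitoring here lives in the training procedure rather than in the trained map itself, which seems to be where the disagreement lies.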

    "The whole program is written to fulfill a certain purpose. How should it monitor that?" (Heiko)

    I was wrong to say it monitors its own goals; rather, it monitors its own state with respect to its own goals. Still, there is such a thing as multi-task learning, and forms of AI that can do it can hold representations of goals.
  • wellwisher
    161
    The fundamental difference between computers and the brain is that neurons are designed differently from computer memory. Neurons, at rest, are at their highest potential. When a neuron fires, it lowers its potential. Computer memory works the opposite way: at rest, computer memory is at a lower potential. This is useful for long-term storage.

    If computer memory were designed like neurons, it would not be stable in storage. It would be subject to spontaneous change as the chemical potential attempts to lower itself. The brain has a way to deal with this, allowing spontaneous creative change using the laws of physics and chemistry while at the same time maintaining high-energy continuity.

    For example, say we designed a future computer using high-energy memory. We would need a backup version of the memory, using traditional low-energy memory. We allow the high-energy memory to be triggered so it spontaneously lowers its potential. This movement of potential rearranges the furniture, so to speak. We then compare the two memories to pick out any useful change, and rewrite the high-potential memory back to its starting point while adding the useful changes.

    In this scenario, the change in the high-energy memory is not based on computer instructions or software, but on the physical pathways needed to lower chemical potential. This gives the memory liberty to find the best paths, which may not be part of any previous logic: creativity. We continue the cycling until the pathways reach a steady state that maximizes energy flow.
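    That hypothetical cycle, sketched in Python purely for illustration (everything here is invented: perturb stands in for the spontaneous lowering of potential, and is_useful is a placeholder for whatever filter decides a change is worth keeping):

    ```python
    import random

    # Hypothetical sketch of the compare-and-rewrite cycle described above.
    def perturb(state):
        """Stand-in for the high-energy memory spontaneously lowering its potential."""
        return [bit ^ (random.random() < 0.1) for bit in state]

    def is_useful(old_bit, new_bit):
        """Placeholder criterion for filtering which spontaneous changes to keep."""
        return random.random() < 0.5

    backup = [random.randint(0, 1) for _ in range(16)]  # stable low-energy copy
    high_energy = list(backup)                          # volatile high-energy copy

    for cycle in range(10):
        high_energy = perturb(high_energy)              # trigger spontaneous change
        # Compare the two memories and keep only the useful differences...
        backup = [new if old != new and is_useful(old, new) else old
                  for old, new in zip(backup, high_energy)]
        high_energy = list(backup)                      # ...then rewrite back to the start
    ```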

    Next, we add a secondary high-energy memory that will use the energy-change profile of the primary as the trigger to ignite spontaneous change in the secondary memory. Now we are getting closer to self-awareness. The brain does this through well-worn ancient genetic pathways in the primary, which trigger a wide range of self-feedback: feelings, sensations, emotions, etc. This occurs at the same time as it triggers spontaneous change in the secondary.

    The energy flow is based on free energy, which is composed of enthalpy and entropy. Free energy has a natural logic, based on the laws of physics, which are universal. This flow does not need manmade language, although manmade language does impact how the high-energy memory of the secondary moves the potential around. This helps to create a disconnect with the secondary: consciousness. The primary cannot turn the secondary into a clone of itself, due to manmade language. One becomes self-aware of the separation while still feeling overlap.