• Prishon
    984
    As a follow-up question about patterns, I'd like to ask this. In AI, neural network systems do a good job of recognizing patterns. I'm not sure how good, but better than ordinary computers. The job is done best by our brain. That's why, on the net, we are frequently asked to enter which numbers and letters we can discover before getting access.

    What makes it so difficult for an ordinary computer (and relatively easy for a neural network AI or the brain) to recognize a pattern?
  • Josh Alfred
    226
    I am not a computer scientist, but I would think that what is behind pattern recognition should be considered a hardware problem, i.e. pattern recognition is a "natural capacity" of the brain to compute.

    Pattern recognition is a sense of repetition. When variables repeat, patterns form. I don't understand how that is sensed, though.

    I think a computer scientist could explain this with far more depth.
  • Prishon
    984
    "Pattern recognition is a sense of repetition." (Josh Alfred)

    What do you mean by this? Can you give an example?
  • TheMadFool
    13.8k
    We can never be sure that there's no pattern:

    1. xxxxx...obvious pattern: x repeats
    2. xy...no pattern but wait a bit and it might be xyxyxy
    3. str...no pattern but it could be strstrstr
    4. pktq...no pattern but possible that pktqpktqpktq
    ...
    No matter how patternless something appears to be, it might actually possess one; you just have to wait. How long? I have no idea.
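
    A quick Python sketch of the same point (purely illustrative): at any finite moment, the string seen so far is still consistent with several possible repeating units, so no finite wait can rule a pattern out.

        def candidate_periods(s):
            """Lengths of repeating units the string seen so far is still
            consistent with (e.g. 'xyx' could be the start of 'xyxyxy...')."""
            return [p for p in range(1, len(s) + 1)
                    if all(s[i] == s[i % p] for i in range(len(s)))]

        print(candidate_periods("xyxyx"))  # [2, 4, 5]: still consistent with 'xy' repeating
        print(candidate_periods("pktq"))   # [4]: only its own length, so far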
  • Prishon
    984


    Thanks. Good day!
  • khaled
    3.5k
    The question has a type error.

    Neural network systems are RUN on computers. AI are programs, not devices. This is like asking “What makes it so difficult for an ordinary phone to text when a texting app does it so much easier?”
  • Prishon
    984
    "Neural network systems are RUN on computers. AI are programs, not devices. This is like asking 'What makes it so difficult for an ordinary phone to text when a texting app does it so much easier?'" (khaled)

    Are there no actual virtual neurons involved?
  • Prishon
    984
    "The question has a type error." (khaled)

    I won't call that a typo. It's a conceptual error.
  • Josh Alfred
    226
    I think madfool provided some examples, and at the same time pushed us into bewilderment by noting "some series of variables can always repeat but we don't know when." Pattern recognition is easier when it is limited to a number of repeating sequences. I thought about your question for some time this week. xyxyxy = input, xy = output is a recognition of a pattern. However, visual cognition in AI can detect xy after only having it input once, such that input = xy, output = xy. That is very linear compared to pattern recognition, which requires vast amounts of mathematics (of which I have no notion). I will ponder from here on out. If you have any questions I will most likely come back to answer them, if I do repeat the initial behavior of visiting this site. :)
  • TheMadFool
    13.8k


    I'm not as certain about this as I'd like to be, but brains are generic in their pattern recognition ability, i.e. they can detect patterns in all sensory modalities (sight, touch, smell, sound, taste), whereas neural networks seem sense-specific, i.e. each neural network can analyze patterns in only one of the five sensory modalities, and from what I know most neural networks developed to date are visually oriented. Ordinary computers are not designed to run pattern recognition software and so fail in this regard.

    Thus we see a gradation in pattern recognition ability: Ordinary computer (can't recognize patterns) -> Neural networks (can sense patterns but sense-specific) -> Brains (can perceive all kinds of patterns).
    Perhaps it's got to do with the architecture of a computer/neural network/brain that makes pattern recognition software compatible or incompatible.
  • boethius
    2.2k
    As khaled points out, current AI are algorithms run on normal computers. Specialized AI devices exist, but they run the same algorithms a normal computer can; it's just that the hardware is optimized to do the math an AI algorithm usually needs.

    So with that distinction, the question is between these AI (machine learning) algorithms, our brains and "normal programming".

    We can understand "normal programming" as code that is static: the programmer writes the code, and that's it; it then executes and does it's thing. Any updates to the code, the programmer needs go in and write those updates.

    Machine learning algorithms have the same "static" phase of code development as above to create the framework, but then the "recognition" algorithm (the one that will do the pattern spotting) is "trained" on the data (things with their associated labels that the algorithm is supposed to spot more generally), which basically means the algorithm is changed (by another algorithm) to get better and better with more and more data (if the data is good).
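
    As a toy sketch of that difference (Python, purely illustrative, not any particular library): the code below is the "static" framework a programmer writes once, while the value of w is what the training loop adjusts from the labelled data.

        def train(examples, steps=1000, lr=0.01):
            """Fit y = w * x by gradient descent: this loop is the 'algorithm
            that changes the algorithm' in miniature."""
            w = 0.0
            for _ in range(steps):
                for x, label in examples:
                    error = w * x - label
                    w -= lr * error * x  # nudge w to reduce the error on this example
            return w

        data = [(1, 2), (2, 4), (3, 6)]  # made-up labelled examples following y = 2x
        print(round(train(data), 2))     # ~2.0: the rule was learned from data, not hand-coded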

    However, the premise of your question is also wrong: normal programming on normal computers can spot plenty of patterns better than us.

    In any sort of structured data, where data points may be related by mathematical functions, a normal computer with normal algorithms is going to do a better job of spotting a wide range of patterns than us just looking at the raw data (data sets can simply be unfeasibly large to go through, even if the patterns are simple).

    A super simple example: a spreadsheet is going to be able to spot the pattern of "the sum of these entries" much faster and more accurately than just sitting there and looking at the list of entries.

    Another example: if you're looking at the raw data of all phone calls in a country, spotting any useful pattern by eye will be exceedingly difficult. However, normal computers with normal algorithms can spot all sorts of useful patterns you may be interested in.
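
    A small illustration of that with made-up call records (Python, hypothetical data): a completely ordinary algorithm surfaces a pattern that would be tedious to spot in millions of raw rows.

        from collections import Counter

        # Hypothetical (caller, callee) records
        calls = [("A", "B"), ("A", "B"), ("C", "D"), ("A", "B"), ("C", "D"), ("E", "F")]

        # Which pairs talk most often? Plain counting, no machine learning needed.
        print(Counter(calls).most_common(2))  # [(('A', 'B'), 3), (('C', 'D'), 2)]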

    Likewise, if you're trying to find the pattern of "normal text" that is encrypted, just looking at the encrypted text is unlikely to help, but totally normal computer algorithms exist that can find such patterns (if there's a weakness in the encryption somewhere).
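
    For instance, a weak cipher like a plain Caesar shift leaks the letter-frequency pattern of ordinary text, and a perfectly ordinary algorithm can exploit it (a toy Python sketch, assuming English-like text where 'e' is the most common letter):

        from collections import Counter

        def crack_caesar(ciphertext):
            """Guess a Caesar shift by assuming the most common ciphertext
            letter stands for 'e', then undo the shift."""
            letters = [c for c in ciphertext.lower() if c.isalpha()]
            most_common = Counter(letters).most_common(1)[0][0]
            shift = (ord(most_common) - ord('e')) % 26
            return "".join(
                chr((ord(c) - ord('a') - shift) % 26 + ord('a')) if c.isalpha() else c
                for c in ciphertext.lower()
            )

        print(crack_caesar("phhw ph khuh"))  # 'meet me here' (shifted by 3)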

    So, in terms of pattern recognition in general, there are some things we're good at, some things a normal computer algorithm is good at, and some things an AI machine learning algorithm is good at (again, AI algorithms can be applied to large datasets we cannot feasibly do anything with).

    Why we're good at the pattern recognition we're good at has the simple answer of literally billions of years of evolution (maybe longer if the panspermia hypothesis is true).

    As for why computers are not better than us at absolutely everything, one answer is that computers are built by us, so they inherit our weaknesses.

    Another answer is that billions of years of evolution may have created some optimum algorithms in their domain that can never be beaten. When our energy consumption is factored in (even more so if compared to a device that must consume raw chemical energy and convert it to electricity), we can still vastly outcompete computers and robots on many tasks. Indeed, it would be interesting to see a competition with our best chess and go players on the same source of energy over the course of the match; this is easy to simulate by just constraining the electricity to work with, but it could be a fun student project or something to build a whole device that runs on food and plays chess.

    Another, more technical, answer is that computers do not actually have abstractions; everything is just a variable, and all variables in the computer are the same "thing", just a string of binary. So simply calling a variable "a tree" does not create this sort of abstraction in the computer itself; the binary that encodes "tree" means nothing special to the computer, and the binary that stores the memory location of whatever value the variable "tree" holds means nothing special to the computer either (it's only us who have the idea that the variable "tree" corresponds to our abstract notion of a tree). So, in this view, it's not a surprise that the more we try to depart from simple number crunching and get computers to solve abstract problems (such as writing the next Harry Potter), the more it becomes like fitting square pegs into round holes (where the pegs are larger than the holes, just so we're clear on that; otherwise I've used this trick plenty of times, works like a charm; of course, you can't fit larger pegs into smaller holes anyway, even if they're both round, so the expression should really focus on the size and not the shape).

    What "stuff" our abstractions are made of, we don't really know, so it's a bit expected that we have a hard time recreating something we don't even understand to begin with.