• Wayfarer
    22.5k
    This just in:

  • Outlander
    2.1k
    Neat. It can be used to make enemies of the state give up information/names/locations of stolen goods, persons, or illicit activities. Kidnappers and ransom takers, etc.

    Unfortunately, what some people forget is that the first computer was a massive, costly piece of machinery that took up an entire wall of a decent-sized room. We now have ones we can wear on our wrists for $50.

    Secret drones the size of a housefly exist. It's only a matter of time before they hit the market and become affordable enough that any college kid can get hold of one.

    So what happens if the current pattern continues? Dare you ask? You could potentially offer to give someone a ride and have the roof or seat of your car equipped with non-contact "brain sensors" of this type, then bring up in conversation, "Man, someone figured out my bank PIN. I used all 4's. I was so dumb. Hey, whatever your bank PIN is... make sure it's a good one." Triggered by such a phrase, the number will likely come up in the person's head, unless they do something basically crazy and start thinking of random numbers as quickly and feverishly as possible.

    Scary direction indeed. What does this mean for the future of humanity? One can only imagine.
  • wonderer1
    2.2k
    You could potentially offer to give someone a ride and have the roof or seat of your car equipped with non-contact "brain sensors...Outlander

    It is worth noting the 15 hours that subjects spent in a scanner before the AI had sufficient training data on the individual to be able to decode that individual's thoughts. Without the AI having been trained to form correct associations between a specific individual's brain activity and what the individual was thinking about, the system can't decode thoughts.
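    To give a sense of what that per-person training amounts to, here is a minimal sketch. To be clear, this is not the researchers' actual pipeline; the data shapes, the word-embedding targets, and the choice of ridge regression are all illustrative assumptions on my part:

    # Minimal sketch of a per-subject "thought decoder". Everything here is made up
    # for illustration: random stand-in data, assumed dimensions, and a simple
    # ridge regression in place of the study's actual model.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_samples, n_voxels, n_dims = 1000, 2000, 300   # many hours of (scan, stimulus) pairs

    X_train = rng.standard_normal((n_samples, n_voxels))   # this person's brain activity
    Y_train = rng.standard_normal((n_samples, n_dims))     # embeddings of what they heard/thought

    # The "training" step: learn this individual's mapping from brain activity to semantics.
    decoder = Ridge(alpha=10.0).fit(X_train, Y_train)

    # Decoding a new scan = predicting a semantic vector, then picking the closest candidate.
    vocab = {"cat": rng.standard_normal(n_dims), "train": rng.standard_normal(n_dims)}
    pred = decoder.predict(rng.standard_normal((1, n_voxels)))[0]
    cosine = lambda u, v: np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    print("decoded:", max(vocab, key=lambda w: cosine(vocab[w], pred)))

    # Without X_train/Y_train collected from *this* person, there are no weights to
    # apply: the learned mapping is specific to that individual's brain responses.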
  • Outlander
    2.1k


    Well, that's somewhat pleasing and comforting to hear. However, like I said: The first million-dollar computer, which took years of research and filled an entire room, is something we now wear on our wrists for little more than the cost of a large pizza. It took 50 years to make that progress, and since then further progress has come so quickly that it seems almost commonplace. So you see the concern and the validity of my argument, I'm sure.
  • wonderer1
    2.2k
    So you see the concern and the validity of my argument, I'm sure.Outlander

    Sure, just pointing out that we don't need tinfoil hats just yet. :wink:
  • Wayfarer
    22.5k
    Without the AI having been trained to form correct associations between a specific individual's brain activity and what the individual was thinking about, the system can't decode thoughts.wonderer1

    :up: Important point. I hadn't picked up the specificity on first listening.
  • Wayfarer
    22.5k
    The first million-dollar computer, which took years of research and filled an entire room, is something we now wear on our wrists for little more than the cost of a large pizza.Outlander

    I read not long ago that there is more computing power in a singing Christmas card than existed in the world in 1946.
  • Wayfarer
    22.5k
    On a more serious note, I think the video does a fairly balanced job of conveying concerns about what exploitation of this kind of technology could lead to. I suppose one way of thinking about it is to ask whether the risks involved in such technologies are exacerbated by seeking to exploit them for commercial gain. As is well known, the kinds of multiplier effects that have been witnessed with the growth of social media and internet search have given rise to vast fortunes for companies such as Alphabet and Meta, among many others. But on the other hand, the pursuit of profit may not be a particularly sound motivation when it comes to researching this kind of technology - as the producer suggests. He says that the research and possible scientific applications are one thing, but that 'productizing' it is another matter entirely.

    On a side note, the founders of OpenAI, Sam Altman and Greg Brockman, were both sacked by the board, out of the blue, last Friday. It seems to have taken everyone by surprise (gift link to NY Times analysis.) The conflict inside OpenAI also seems to be, at least in part, about the dangers of commercialisation.

    As to the philosophical implications, they are indeed fascinating, but I want to resist the inevitable suggestion that we've 'figured out how the mind operates'. As noted already, the system requires extensive synchronisation with a specific subject in order to be effective. (I also picked up a way in which the predictive power of the algorithm to complete sentences is modelled on Shannon's theory; a toy illustration of that kind of prediction is appended below.) And last but not least, the system is imbued with whatever power it has by scientific expertise and insights.
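    For what the Shannon point amounts to in miniature, here is a toy version of the 'guessing game': predicting the next word purely from counts of what has followed before. This is my own illustration, nothing like the neural language model actually used; it just shows the bare idea of prediction from prior context.

    # Toy Shannon-style next-word prediction: a bigram frequency model built from counts.
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat . the cat ate the fish . "
              "the dog sat on the rug .").split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1              # count how often `nxt` follows `prev`

    def predict(prev_word):
        """Return the most frequent continuation seen after prev_word."""
        counts = bigrams[prev_word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict("the"))   # -> 'cat' (seen twice, vs. 'mat', 'fish', 'dog', 'rug' once each)
    print(predict("sat"))   # -> 'on'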
  • punos
    561


    The breakthrough part is to do with the use of AI to do the decoding, but the ability to decode has been around for a while now. I remember reading, what seems like ages ago, about some Japanese scientists who were able to record dream imagery using fMRI.

    About Sam Altman getting fired, I wonder if it has anything to do with government and national security, since AI is considered to be as dangerous as, or more dangerous than, nuclear weapons. If OpenAI has achieved AGI, or is very close to it, then the government might feel the need to step in and take control of the situation in a secretive way. Imagine the government allowing corporations to develop nuclear weapons if they wanted to. I heard that Joe Biden recently saw the last Mission Impossible movie and got spooked by the film's AI villain; that, along with the recent AI safety executive order he put out, makes this seem like a non-zero probability.
  • Wayfarer
    22.5k
    There are a couple of dozen plausible movie plots right there: sci-fi, espionage, and end-of-world scenarios. Writers will be able to take their pick.
  • RogueAI
    2.8k
    Unfortunately, what some people forget is that the first computer was a massive, costly piece of machinery that took up an entire wall of a decent-sized room. We now have ones we can wear on our wrists for $50.Outlander

    There probably is a limit to how far from a person's skull the sensors can be, no matter how good they are. For example, past a certain point, radio signals from Earth become unextractable, no matter how good the radio telescope. I think that no matter how advanced the tech gets, the sensors will have to be close to the head, and there will have to be at least a couple of them.

    Also, while it's true we have computers on our wrists, I don't think we're ever going to have quantum computers on our wrists, or table-top gravitational wave detectors. Some things are really hard to miniaturize, and I'm betting this is one of them.
  • Wayfarer
    22.5k
    Have you signed up and actually used ChatGPT yet? It must be a year since it came out… quick Google… Nov 30th 2022… and I’ve been bouncing ideas off it since Day 1. It’s really quite incredible - not all-knowing, not perfect, but still totally amazing.

    As for quantum computers, that’s another matter altogether, and one I’m highly sceptical about, but that’s for another thread.
  • RogueAI
    2.8k
    Yeah, I played around with it a lot when I first discovered it.
  • wonderer1
    2.2k
    Some things are really hard to miniaturize, and I'm betting this is one of them.RogueAI

    Yeah, for now at least, you need a specially shielded room to do MEG.
  • wonderer1
    2.2k
    Here's a link I posted a while back in The Post Linguistic Turn thread, which discusses the original research covered in the OP video. From that earlier link:

    Decoding worked only with cooperative participants who had participated willingly in training the decoder. If the decoder had not been trained, results were unintelligible, and if participants on whom the decoder had been trained later resisted or thought other thoughts, results were also unusable.
  • RogueAI
    2.8k
    Also, I wonder what kind of jamming hats people could wear to thwart it?
  • wonderer1
    2.2k
    Also, I wonder what kind of jamming hats people could wear to thwart it?RogueAI

    I suspect carrying a cellphone around might be sufficient (or could be made sufficient). However, for those who prefer a lower-tech solution, the link I posted says:

    Since the magnetic signals emitted by the brain are on the order of a few femtoteslas, shielding from external magnetic signals, including the Earth's magnetic field, is necessary. Appropriate magnetic shielding can be obtained by constructing rooms made of aluminium and mu-metal for reducing high-frequency and low-frequency noise, respectively.

    So layered mu-metal and aluminum would do the job. Mu-metal is nice and shiny and corrosion-resistant, so you could be stylin'.
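    To put some rough numbers on why that shielding is needed (the ~50 µT figure for Earth's field is a typical textbook value; the few-femtotesla figure is from the quote above):

    # Back-of-the-envelope comparison of the field strengths involved in MEG.
    earth_field_T = 50e-6    # ~50 microtesla: a typical value for Earth's magnetic field
    brain_field_T = 5e-15    # "a few femtoteslas", per the quoted description above

    ratio = earth_field_T / brain_field_T
    print(f"Earth's field is roughly {ratio:.0e} times stronger than the brain's signal")
    # -> roughly 1e+10, which is why mu-metal/aluminium rooms (or very good active
    #    cancellation) are needed before femtotesla signals are measurable at all.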
  • Alkis Piskas
    2.1k

    Quite impressive as a technology. Yet, I would expect at least one example of how it really works. That is, a subject thinking of something --just an image, like the apple we've seen-- and the fMRI system recognizing and naming or reproducing that image. Well, I saw nothing of the sort.

    Transferring my thoughts to a machine and seeing them on a screen was always one of my wildest dreams. Reality, though, always steps in and stops me.

    Thoughts are not physical in nature. The brain only receives signals of how the person reacts to his thoughts, i.e. the effect these thoughts have on the body. (I could explain how this works, but not here, of course.)
    The brain is only a stimulus-response mechanism. It receives and sends signals. That's all. Information may be stored in tissues or neurons, but always as signals. Now just imagine what a task it would be to identify those signals among billions, name them, connect them into "words" and then "phrases", and so on. It would be as if we tried to assemble millions of pixels in order to form an image without even knowing what the image is about, and we would have to do it in total darkness. It's not like a jigsaw puzzle, where we are given the image we are trying to assemble, and in plain light.
  • Wayfarer
    22.5k
    That is, a subject thinking of something --just an image, like the apple we've seen-- and the fMRI system recognizing and naming or reproducing that image. Well, I saw nothing of the sort.Alkis Piskas

    Watch again. There is a sequence about exactly that at around 12:14 with about 3-4 examples (cat, train, surfer, etc.)

    Thoughts are not physical in natureAlkis Piskas

    I think the argument can be made that there is a physical aspect to them. What is not physical is insight, grasping the relations between ideas, and understanding meaning.
  • Alkis Piskas
    2.1k
    There is a sequence about exactly that at around 12:14 with about 3-4 examples (cat, train, surfer, etc.)Wayfarer
    Yes, I saw that. It is what AI art-generators do based on text prompts. This must be from DALL·E 3, one of the best ones. (I have not personally tried it, but I have seen samples.) And since this can be done from text, it must also be possible from speech, using a speech-to-text converter. Indeed, at some point I saw a subject moving his mouth, like murmuring or something.
    Anyway, it is quite impressive, as I said.
    BTW, I just googled < project thoughts on screen > and got ... 396,000,000 results. (Of course these numbers are never exact, but they are quite indicative of the popularity of a subject.) I read a couple of them, and these kinds of projects show only a possibility. As far as fMRI especially is concerned, it is only a possibility in the future. So, let's see what the future has in store for us ... :smile:

    I think the argument can be made that there is a physical aspect to them. What is not physical is insight, grasping the relations between ideas, and understanding meaning.Wayfarer
    Well, they consist of energy and mass, but not of the kind we know in physics. Yet this energy and mass can be detected with special devices, e.g. polygraphs. (I have used such a device myself extensively, though not a polygraph.)
    This detection is possible because thoughts affect the body, as I already said. And in this way we can have indications about the kind of thoughts the subject has --from very "light" to quite "heavy", their regular or irregular flow, their abrupt changes, etc.-- but not, of course, about their content.
  • Wayfarer
    22.5k
    Yes, I saw that. It is what AI art-generators do based on text promptsAlkis Piskas

    You don't understand it, then. The rendered images were not from text prompts; they were from brain scans. There was no other input than a subject with electrodes attached to their cranium.
  • Alkis Piskas
    2.1k

    I didn't say that they use AI art-generators, nor that the fMRI gets text prompts, for God's sake.
    I said that the produced images look like those produced by AI art-generators based on text prompts. Huge difference.

    But this is of secondary importance. You chose to stick on that instead of on what I said is of most importance.

    Well, whatever. Just keep believing that fMRI can read thoughts ...
  • Wayfarer
    22.5k
    And since this can be done from text, it must also be possible from speech, using a speech-to-text converterAlkis Piskas

    Nope. Brainwaves. I know, hard to believe, but there it is.
  • Alkis Piskas
    2.1k


    From https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/brain-waves:
    ************************************************************************************************************
    Brain waves are oscillating electrical voltages in the brain measuring just a few millionths of a volt. There are five widely recognized brain waves, and the main frequencies of human EEG waves are listed in Table 2.1 along with their characteristics.

    Table 2.1. Characteristics of the Five Basic Brain Waves
    Frequency band  Frequency  Brain states
    --------------  ---------  -----------------------------------------------------
    Gamma (γ)          >35 Hz  Concentration
    Beta  (β)        12–35 Hz  Anxiety dominant, active, external attention, relaxed
    Alpha (α)         8–12 Hz  Very relaxed, passive attention
    Theta (θ)          4–8 Hz  Deeply relaxed, inward focused
    Delta (δ)        0.5–4 Hz  Sleep
    
    ************************************************************************************************************
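    (For reference, separating a raw EEG trace into these bands is a routine filtering exercise; here is a minimal sketch with a made-up signal, assuming numpy and scipy are available:)

    # Minimal sketch: split a synthetic EEG-like signal into the bands tabled above.
    # The signal and the band-pass filtering choices are purely illustrative.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250.0                                  # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)                # 10 seconds of data
    eeg = (np.sin(2 * np.pi * 10 * t)           # a 10 Hz "alpha" component
           + 0.5 * np.sin(2 * np.pi * 20 * t)   # a 20 Hz "beta" component
           + 0.2 * np.random.randn(t.size))     # noise

    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
             "beta": (12, 35), "gamma": (35, 100)}

    for name, (lo, hi) in bands.items():
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        power = np.mean(filtfilt(b, a, eeg) ** 2)
        print(f"{name:5s} band power: {power:.3f}")
    # Alpha dominates here, because the synthetic signal is mostly a 10 Hz sine.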

    Do you still believe that brain waves can be used to detect the content of thoughts, like images?
  • Wayfarer
    22.5k
    That’s what the YouTube video is claiming. I’m not saying you have to believe it.

    Incidentally, the channel, Cold Fusion TV, produces generally pretty good-quality mini-documentaries on a variety of tech and business products.
  • Alkis Piskas
    2.1k
    That’s what the YouTube video is claiming. I’m not saying you have to believe it.Wayfarer
    That's better!

    Incidentally, the channel, Cold Fusion TV, produces generally pretty good-quality mini-documentaries on a variety of tech and business products.Wayfarer
    I don't doubt it. But you also have to look at what a lot of other sources have to say on the subject. (Again, 396,000,000 Google results!)

    If some technology were even just close to being able to identify images from thoughts, such a thing would have revolutionized science --especially psychiatry and psychology-- and the whole planet would have heard about it.
  • wonderer1
    2.2k
    The technology used is not fMRI or EEG.

    Magnetoencephalography (MEG) is a functional neuroimaging technique for mapping brain activity by recording magnetic fields produced by electrical currents occurring naturally in the brain, using very sensitive magnetometers. Arrays of SQUIDs (superconducting quantum interference devices) are currently the most common magnetometer, while the SERF (spin exchange relaxation-free) magnetometer is being investigated for future machines.[1][2] Applications of MEG include basic research into perceptual and cognitive brain processes, localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback. This can be applied in a clinical setting to find locations of abnormalities as well as in an experimental setting to simply measure brain activity.
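    If anyone wants to poke at real MEG data, here is a minimal sketch using the MNE-Python package and its downloadable sample dataset (a large download on the first run). Function and argument names are per recent MNE versions; treat it as a starting point rather than gospel:

    # Minimal sketch: load a sample MEG recording and look at the magnetometer channels.
    import os
    import mne

    data_path = mne.datasets.sample.data_path()       # fetches the bundled sample dataset
    fname = os.path.join(str(data_path), "MEG", "sample", "sample_audvis_raw.fif")

    raw = mne.io.read_raw_fif(fname, preload=True)
    raw.pick("mag")                              # keep only the SQUID magnetometer channels
    print(raw.info["sfreq"], "Hz,", len(raw.ch_names), "magnetometers")

    raw.filter(l_freq=1.0, h_freq=40.0)          # band where most cortical activity of interest lies
    data, times = raw.get_data(return_times=True)
    print(data.shape)                            # (n_channels, n_samples)
    print(abs(data).max())                       # field values on the order of picotesla or less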
  • Alkis Piskas
    2.1k

    Wayfarer, I feel somewhat bad because in some ways I am pushing against your enthusiasm regarding this indeed impressive video. Unfortunately, it happens that I know well a few things that make mind-reading impossible on a content basis. But certainly I cannot exclude that it could happen in one way or another in the future. It all depends on the means one is using. And there are a lot of alternative methods of achieving such a goal.

    As for Meta's technology and this video, I did a little research on the subject "Mind Reading using fMRI" (w/o quotes), restricting the period to "Last month". Meta's experiments appeared in only two articles in the first 60 results. (I didn't read their content.) And when I restricted the period to "Last week" --which just covers the date of the video, which was posted 1-2 days ago-- no such articles appeared. (You can verify that yourself.)
    Don't you find that a little strange?

    I really wish to be proved wrong, and for Meta's or some other technology to make my dream come true and your topic to be proved prophetic!
  • Joshs
    5.7k



    Nope. Brainwaves. I know, hard to believe, but there it is.Wayfarer

    Do you still believe that brain waves can be used to detect the content of thoughts, like images?Alkis Piskas

    The issue isn’t whether machines can read thought via detecting brain waves, but what kind of thinking is involved.
    We know that implanted electrodes can detect neural signals in limbs and translate them into controllable prosthetics. This is a primitive form of ‘reading’ neural waves. One could imagine teaching someone with locked-in syndrome Morse code, and implanting electrodes strategically in a part of the brain whose activity is specifically and narrowly correlated with thinking of the pattern of dots and dashes. In this way one could decipher language before it is spoken (a toy sketch of such dot-and-dash decoding is appended below).

    At the other end of the spectrum are devices that read the combined output of massive numbers of neurons deep in the neocortex when persons are thinking in various ways. This kind of conceptual thought, which has not yet been processed by the person into discrete word symbols, tends to be what we think of in terms of mind reading, but no device has yet been able to decipher these highly complex patterns of neural firing. It sounds to me like what the fMRI in the video is doing is targeting areas of the brain somewhere between the Morse code example and pre-verbal thought. Once one has in mind a robustly formed verbal concept or image, the neural measuring equipment can locate consistent neural patterns that correspond to words that are being finalized by the brain in preparation for communication via speech or gesture.
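    To make the Morse-code thought experiment concrete: suppose the electrode work has already been done and we have a clean on/off activity trace (which is, of course, the genuinely hard part). Decoding is then just a matter of turning burst durations into dots, dashes, and letters. A toy sketch, with the trace simulated rather than recorded:

    # Toy decoder for the Morse-code idea: the binary activity trace is simulated here;
    # obtaining such a trace from real electrodes is the hard part this sketch skips.
    MORSE = {".": "E", "-": "T", "..": "I", "....": "H", ".-": "A", "-...": "B"}
    UNIT = 10  # samples per Morse time unit (dot = 1 unit on, dash = 3 units on)

    def decode(trace):
        """Decode a binary activity trace (list of 0/1 samples) into letters."""
        runs, prev, length = [], trace[0], 0    # split the trace into runs of equal samples
        for s in trace:
            if s == prev:
                length += 1
            else:
                runs.append((prev, length)); prev, length = s, 1
        runs.append((prev, length))

        letters, symbol = [], ""
        for value, length in runs:
            units = round(length / UNIT)
            if value == 1:                       # an "on" burst: short = dot, long = dash
                symbol += "." if units <= 1 else "-"
            elif units >= 3 and symbol:          # a long "off" gap closes the letter
                letters.append(MORSE.get(symbol, "?")); symbol = ""
        if symbol:
            letters.append(MORSE.get(symbol, "?"))
        return "".join(letters)

    # Simulate "HI": H = "...." and I = "..", 1-unit gaps inside letters, 3-unit gap between.
    def burst(units_on): return [1] * (units_on * UNIT) + [0] * UNIT
    trace = burst(1) * 4 + [0] * (2 * UNIT) + burst(1) * 2 + [0] * (3 * UNIT)
    print(decode(trace))   # -> HI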
  • Alkis Piskas
    2.1k
    The issue isn’t whether machines can read thought via detecting brain waves, but what kind of thinking is involved.Joshs
    Right.

    Excellent description of how the brain works regarding thoughts/thinking and what possibilities exist for mind-reading. :up:
    (It fills gaps in my knowledge of the subject, which I never felt the need to fill in myself. :smile:)