this post was submitted on 26 Apr 2024

Preprint version, because Sci-Hub doesn't have it yet: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10120732/

Abstract

Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using functional-MRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. We then use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also non-invasively control neural activity in higher-level cortical areas, such as the language network.
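
For a sense of what a "GPT-based encoding model" of this kind looks like in practice, here's a minimal sketch of the general idea (GPT-2 as a stand-in embedder, ridge regression for the encoder, and made-up response values; none of the specifics are from the paper):

```python
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import Ridge

# GPT-2 as a stand-in for whatever GPT variant the authors actually used.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
gpt = GPT2Model.from_pretrained("gpt2")
gpt.eval()

def sentence_embedding(sentence: str) -> np.ndarray:
    """Mean-pool the last hidden layer over tokens to get a sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = gpt(**inputs).last_hidden_state  # shape (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Made-up training data: sentences paired with the measured magnitude of the
# fMRI response in the language network (one number per sentence).
train_sentences = [
    "The dog chased the ball across the yard.",
    "Colorless green ideas sleep furiously.",
    "We sat quietly on the couch.",
]
brain_response = np.array([0.35, 0.80, 0.20])  # placeholder values, not real data

X = np.stack([sentence_embedding(s) for s in train_sentences])
encoder = Ridge(alpha=10.0).fit(X, brain_response)

# Scoring new candidate sentences: high predictions = candidate "driver"
# sentences, low predictions = candidate "suppressor" sentences.
candidates = ["Domain wikileaks gone; access is NOT...", "I had toast for breakfast."]
scores = encoder.predict(np.stack([sentence_embedding(s) for s in candidates]))
for sentence, score in zip(candidates, scores):
    print(f"{score:+.3f}  {sentence}")
```

The actual study is of course far more careful about held-out participants and evaluation, but this is the basic shape of a sentence-to-response encoding model.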

top 18 comments
[–] [email protected] 9 points 6 months ago

Is this abstract written by AI lmfao

[–] [email protected] 9 points 6 months ago (2 children)

Study seems neat, but I feel like "non-invasively control neural activity" is quite the up-sell. Like, couldn't any behaviour that elicits some kind of response from another person "non-invasively control neural activity"?

If I make someone flinch by pretending to punch them, am I non-invasively controlling their neural activity?

It's still cool (and scary) that they used LLMs and other data science to automate the creation of sentences that trigger specific neural responses. Surely this won't be used for more horrors

[–] [email protected] 6 points 6 months ago* (last edited 6 months ago)

Study seems neat, but I feel like "non-invasively control neural activity" is quite the up-sell. Like, couldn't any behaviour that elicits some kind of response from another person "non-invasively control neural activity"?

Probably, but not necessarily in such a targeted way, not without first developing a similar dataset and model that associates fMRI-inferred activity with the modality of stimulus you want to present to the subject.

It's still cool (and scary) that they used LLMs and other data science to automate the creation of sentences that trigger specific neural responses. Surely this won't be used for more horrors

advertisers right now: buddy-christ

[–] [email protected] 2 points 6 months ago (1 children)

I'm neither a linguist nor a neuroscientist, but it sounds like "non-invasively control neural activity in higher-level cortical areas" may be bazinga-speak for "people react to LLM babble the same way they react to actual people saying things".

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

It's a bit stronger than that, in the sense that the model the authors developed can aim at particular regions of the language-processing network and stimulate or suppress activity in those regions specifically.

[–] [email protected] 5 points 6 months ago (3 children)

I'm trying to understand the abstract a little bit here but struggling. Is the implication here that they are able to push an LLM to create surprising or novel phrases by predicting the strength of brain responses to those phrases?

If so, that's an interesting approach to escape the problem of LLM-generated text being extraordinarily bland.

[–] [email protected] 7 points 6 months ago (3 children)

Honestly my first thought was basically AI-generated speech jamming, but your idea might be closer to reality.

[–] [email protected] 4 points 6 months ago (1 children)

yeah the way they worded it is really strange

[–] [email protected] 2 points 6 months ago

sounds like their plan is working.

[–] [email protected] 3 points 6 months ago

I mean that is how I read it, and idk how you could read it any other way??

but also non-invasively control neural activity in higher-level cortical areas, such as the language network.

they basically state the intention right there???

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago)

it definitely produces some difficult sentences: "Domain wikileaks gone; access is NOT..."
"Both mentally and physically, you're attracted."

So I suppose you could hook this up to a highly directional beamforming speaker and, instead of just playing someone's own speech back at a delay, play back a slightly altered version engineered to maximally surprise them, confusing them even more.

[–] [email protected] 4 points 6 months ago (1 children)

Is the implication here that they are able to push an LLM to create surprising or novel phrases by predicting the strength of brain responses to those phrases?

I suppose this is a consequence of what they've demonstrated, but it's not really the main thesis of the paper.

They want to reverse-engineer the cognition of language by identifying which features of the perceptual input (in this case, written language) maximize or minimize neurological activity in the language-processing network. They did this by training a model to associate fMRI activations with sentences (as encoded in the hidden layers of a language model) and then running that model backwards: starting from a sentence and asking what modifications to it would drive up or reduce brain activity. Then they did some experiments to see how well this worked and concluded that it worked reasonably well, and much better than chance.
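
Roughly, the "run the model backwards" step can be pictured as a search loop like the sketch below. The `predict_response` function here is a dummy placeholder for their trained encoding model, and the greedy word-swap search is just my illustration of the idea, not their actual procedure:

```python
import itertools

def predict_response(sentence: str) -> float:
    """Dummy stand-in for the trained encoding model: returns a predicted
    language-network response magnitude. Here it just rewards long words,
    purely so the example runs end to end."""
    words = sentence.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def search_sentence(sentence: str, vocabulary: list[str], maximize: bool = True,
                    n_rounds: int = 3) -> tuple[str, float]:
    """Greedy word-swap search: each round, try replacing every word with every
    vocabulary item and keep the single swap that most increases (or decreases)
    the predicted response."""
    best, best_score = sentence, predict_response(sentence)
    for _ in range(n_rounds):
        words = best.split()
        improved = False
        for position, candidate in itertools.product(range(len(words)), vocabulary):
            trial_words = list(words)
            trial_words[position] = candidate
            trial = " ".join(trial_words)
            score = predict_response(trial)
            if (score > best_score) if maximize else (score < best_score):
                best, best_score, improved = trial, score, True
        if not improved:
            break
    return best, best_score

vocab = ["furiously", "wikileaks", "sat", "on", "a", "incomprehensibilities"]
driver, hi_score = search_sentence("We sat on the couch.", vocab, maximize=True)
suppressor, lo_score = search_sentence("We sat on the couch.", vocab, maximize=False)
print("drive:   ", driver, hi_score)
print("suppress:", suppressor, lo_score)
```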

This paper caught my eye because the next step, if you're an advertiser, is to use the kind of data collected in this experiment to accomplish more complicated objectives like reducing response inhibition or maximally stimulating cravings.

[–] [email protected] 2 points 6 months ago

Ok I see, thanks for the explanation. I wonder how it stacks up against more Darwinian processes like YouTube titles.

[–] [email protected] 3 points 6 months ago

They are optimizing to trigger you. It's like YouTube's recommendations page. Hate-mongering all the way.

[–] [email protected] 4 points 6 months ago

Generating text that drives reactions? Advertising is about to get a lot more annoying

[–] [email protected] 4 points 6 months ago

These results establish the ability of neural network models to not only mimic human language but also non-invasively control neural activity in higher-level cortical areas, such as the language network.

A very fun thought.

[–] [email protected] 2 points 6 months ago (1 children)

Only in English? I’m on my phone and can only get the abstract

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

I think that's the case. I'm not really plugged into China's neuroscience journals, but I wouldn't be surprised if someone there eventually replicates this with one of their LLMs.