this post was submitted on 17 Jul 2024

Science

[–] [email protected] 16 points 3 months ago (3 children)

Unless there has been a serious effort to address the poor quality of fMRI studies since the dead fish paper, I would recommend a cautious outlook.

article and link to fish study: https://law.stanford.edu/2009/09/18/what-a-dead-salmon-reminds-us-about-fmri-analysis/

[–] [email protected] 7 points 3 months ago (1 children)

Completely agreed, which is why it's promising that they're looking for patterns rather than specific areas of activation and they are pairing up findings with treatment and using statistics to see if certain treatment modalities work better for certain broad patterns.

[–] [email protected] 3 points 3 months ago (1 children)

Is it though? Isn't that more vulnerable to p-hacking and its kindred? I lack the expertise to make much of the paper; I'm just pretty disappointed with neuropsych as a field :P Data on depression treatment success are already noisy as fuck and in replication hell, and classifying noisy-as-fuck fMRI data into broad patterns seems hard to do in a repeatable fashion.

I guess we'll find out in time if this replicates, if anyone even tries to do that.

[–] [email protected] 4 points 3 months ago (1 children)

Great thought process! Yes, fMRI imaging is very vulnerable to p-hacking, which is more or less what the dead fish paper is pointing out (even when properly calibrated, it's a problem with how noisy the raw data is in the first place). Classifying broad patterns, however, eliminates some of the noise the dead fish paper shows to be problematic: instead of asking whether individual microstructures cross a statistical threshold for activation, you move the question to the macro level. While the dead fish paper showed apparent activity in specific areas, if you then looked at activity across larger portions of the brain, or the whole brain, you would detect no statistical difference from rest (or from a dead fish, in this case).
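To make that concrete, here's a toy simulation (mine, not the study's; plain Python, pure noise, made-up voxel and scan counts) of how a voxel-wise fluke vanishes when you test the region as a whole:

```python
import random

random.seed(1)

N_VOXELS = 500  # hypothetical voxels in one region
N_SCANS = 20    # hypothetical measurements per voxel

def t_stat(xs):
    """One-sample t statistic against a true mean of 0."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m / (v / n) ** 0.5

# Pure noise: no real activation anywhere (our "dead salmon").
voxels = [[random.gauss(0, 1) for _ in range(N_SCANS)]
          for _ in range(N_VOXELS)]

# Voxel-wise analysis: the most extreme t over 500 voxels looks
# "significant" purely by chance (the dead fish problem)...
max_t = max(abs(t_stat(v)) for v in voxels)

# ...while a single test on the region-averaged signal is just one
# draw, with only the nominal 5% chance of crossing the threshold.
region_mean = [sum(v[i] for v in voxels) / N_VOXELS
               for i in range(N_SCANS)]
region_t = abs(t_stat(region_mean))

print(f"max voxel-wise |t|: {max_t:.2f}")
print(f"region-average |t|: {region_t:.2f}")
```

With 500 separate tests, the maximum |t| essentially always clears the single-test threshold (about 2.09 for 19 degrees of freedom), while the one region-level test keeps its nominal false-positive rate.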

Furthermore, this study doesn't stop there: it asks whether these groupings tell us anything with regard to treatment. Each group is split into subgroups based on treatment modality, and the different treatments (therapy, drugs, etc.) are compared from group to group to see whether the broad fMRI groupings make any kind of clinical sense. If the fMRI grouping were completely bogus and p-hacked, the treatment groups would show no difference from each other. This two-step process means that bogus groups, and groups with no real difference in clinical treatment outcomes, get filtered out along the way by statistical rigor.
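As a sanity check on that logic, here's a toy simulation (my own sketch, not from the study; made-up patient counts) showing why the clinical step acts as a filter: if the fMRI groupings were pure noise, arbitrary patient splits would show a treatment-outcome "difference" only at roughly the 5% chance rate.

```python
import random

random.seed(2)

N_PATIENTS = 40
N_BOGUS_GROUPINGS = 500
T_CRIT = 2.02  # approx. two-sided 5% critical value, ~38 degrees of freedom

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Treatment outcomes with no real subgroup structure at all.
outcomes = [random.gauss(0, 1) for _ in range(N_PATIENTS)]

# A "bogus" fMRI grouping is just a random split of the same patients.
significant = 0
for _ in range(N_BOGUS_GROUPINGS):
    random.shuffle(outcomes)
    group_a, group_b = outcomes[:20], outcomes[20:]
    if abs(two_sample_t(group_a, group_b)) > T_CRIT:
        significant += 1

print(f"bogus groupings that 'worked': {significant}/{N_BOGUS_GROUPINGS}")
```

So a p-hacked grouping should fail the second, clinical step the vast majority of the time, which is what makes the two-step design a real filter rather than just two chances to hack.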

[–] [email protected] 3 points 3 months ago

Fair, fair. I assume the group is probably planning to run a more interventionist study to see if the results hold when you run time forward.

It'll be good news if it works (though I do worry we're heading toward a Brave New World-style future where disquiet with the status quo is pathologised and medicated away, stunting criticism), but I won't go to bat for it yet.

[–] [email protected] 4 points 3 months ago

Researchers scanned a dead fish while it was “shown a series of photographs depicting human individuals in social situations. The salmon was asked to determine what emotion the individual in the photo must have been experiencing.”

The work is, however, a compelling and humorous demonstration of the problem of multiple comparisons. This is a principle in statistics that basically says when you’re looking at enough bits of information (i.e. doing lots of statistical tests), some will seem to be what you’re looking for – purely by chance. In fMRI experiments, there are a LOT of pieces of data to compare, and without statistical correction for this phenomenon (which is not always done), some will indeed be significant, just by chance.
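That principle is easy to reproduce (a rough sketch in plain Python, nothing fMRI-specific; the test counts and the roughly-Bonferroni threshold are my own illustrative choices): run enough tests on pure noise and "significant" results appear at the nominal rate, unless you correct the threshold for the number of comparisons.

```python
import random

random.seed(0)

N_TESTS = 1000     # stand-in for many voxel-level comparisons
N_SAMPLES = 20
T_CRIT = 2.09      # approx. two-sided 5% critical value, 19 degrees of freedom
T_CORRECTED = 5.0  # rough Bonferroni-style threshold for 1000 tests

def t_stat(xs):
    """One-sample t statistic against a true mean of 0."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m / (v / n) ** 0.5

uncorrected = corrected = 0
for _ in range(N_TESTS):
    t = abs(t_stat([random.gauss(0, 1) for _ in range(N_SAMPLES)]))
    if t > T_CRIT:
        uncorrected += 1  # ~5% of pure-noise tests land here by chance
    if t > T_CORRECTED:
        corrected += 1    # almost nothing survives the corrected threshold

print(f"uncorrected false positives: {uncorrected}/{N_TESTS}")
print(f"corrected false positives:   {corrected}/{N_TESTS}")
```

Around fifty "discoveries" from nothing but noise at the uncorrected threshold, and essentially none after correction — which is the salmon's whole point.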

[–] [email protected] 3 points 3 months ago (1 children)

Yep, this. I knew people in fMRI research about 5-10 years ago, and the word was that everything before then might be wrong or unreliable; the field had yet to get a grip on what it was doing.

And even then, I couldn’t help but be suspicious at what I saw in the nature of the field. It seemed very opportunistic and cavalier about how wonderfully easy it was for them to gather large amounts of data and perform all sorts of analysis. My bias being that I was more of a wet lab person envious of how easy their work seemed. But still it all seemed like a way too comfortable stretch.

[–] [email protected] 2 points 3 months ago (1 children)

I never actually got through my PhD and it was in physics anyway but yeah. It always seemed to me that the messier fields had these New Exciting Techniques (TM) where you could vacuum up absolutely insane amounts of data and then play with stats till it showed what you wanted.

I don't want to be all "hur dur, they're doing it wrong." Studying anything to do with biology necessarily means you're stuck with systems with trillions of variables, and you have the awful problem of trying to design experiments where those hopefully average out into the background. I just think that, consequently, big headlines are irresponsible until stuff replicates a few times — which, unfortunately, almost never happens because it's not sexy. (And papers are often written so badly, and the universe is so gloriously subtle, that even mechanistic stuff like a synthesis can be a struggle to replicate.)

[–] [email protected] 2 points 3 months ago

Yep. On top of the complexity of biology, which is real, there are real scientific issues with presuming we know anything at all about the brain and how its observable qualities relate to our psychology. The mind is just way too slippery a phenomenon, and the brain too slippery a system, to be remotely comfortable scooping up and analysing piles of data.

[–] [email protected] 8 points 3 months ago (1 children)

Around 30% of people with depression have what’s known as treatment-resistant depression, meaning multiple kinds of medication or therapy have failed to improve their symptoms. And for up to two-thirds of people with depression, treatment fails to fully reverse their symptoms to healthy levels.

Depressingly big numbers

[–] [email protected] 9 points 3 months ago* (last edited 3 months ago)

I'm in that bin and I'd say that a significant portion of my symptoms could be alleviated by making the world less of a horror show.

We've made society inequitable to the extent that if you go outside you will see heartbreaking tragedy, you have to pay to exist basically anywhere and consequently you have zero freedom, basically all of life is dictatorial, your mind is constantly under assault by ads/propaganda, family and friends are destroyed by forcing everyone to move around to find work/shelter, and you're constantly like 3 bad months from losing it all.

TBH I'm shocked at how few people are depressed.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

IIRC, fMRI tech is not quite there. But if they can get it right even 50% of the time, it'll be better than what we have now and worth further study resources.

Edit: I see now that there's a whole discussion thread about how unreliable fmri is. Should have read before commenting.