[-] varmint@hexbear.net 98 points 3 days ago

This is the kind of stuff that convinces me that Western academia is about to slam into a brick wall and die

[-] Horse@lemmygrad.ml 54 points 3 days ago
[-] aqwxcvbnji@hexbear.net 12 points 2 days ago

No, learning things in school and doing scientific research is good. Letting that get destroyed by a couple of Silicon Valley oligarchs is bad.

Obviously the Byzantine admission system and absurd tuition fees in the US (and UK) are horrific, but that's not what's being destroyed here.

[-] haxboar@hexbear.net 12 points 2 days ago* (last edited 2 days ago)

I felt that way when I was 18, and knew more about certain topics than my professors did because I had the internet. Also, I remember realising, when I was 10, that education was more about tolerating bureaucracy than actually knowing the material.

Sheesh, the US education system sucks

[-] varmint@hexbear.net 82 points 3 days ago

We're witnessing the death of academia in real time. Knowledge acquisition will cease and we will descend into a pit of regurgitated slurry until this system collapses

[-] InevitableSwing@hexbear.net 39 points 3 days ago

we will descend into a pit of regurgitated slurry until this system collapses.

I guess that's this century in a nutshell.

[-] Blakey@hexbear.net 26 points 3 days ago

It kinda needs to happen in a lot of ways. I like academia on, like, a conceptual level, but "publish or perish" and the reproducibility crisis are imo signs of a deeply entrenched problem and I am not convinced it can be solved by reform. The breakdown of liberal academia is probably as inevitable and necessary as the breakdown of capitalism and liberalism.

[-] Collatz_problem@hexbear.net 13 points 3 days ago

LLMs would just make the reproducibility crisis much worse.

[-] umbrella@lemmy.ml 14 points 3 days ago

mmmmmm regurgitated slurry

[-] volcel_olive_oil@hexbear.net 72 points 3 days ago

spent so much time trying to make the computer learn things they forgot how humans learn things

this is part of "everyone is twelve". very serious academics going "this is fantastic. I can skip eight weeks of school!"

[-] facow@hexbear.net 40 points 3 days ago

Cargo cult behavior. Churn out 50 slop papers you maybe skim over and no one else reads or attempts to replicate. Feed the slop back into the slop machine to shit out a thesis. Congrats you've got your doctorate without learning anything or generating anything of value!

[-] Le_Wokisme@hexbear.net 22 points 3 days ago

there's a reproducibility crisis in several fields and you don't get money for publishing negative results

[-] CupcakeOfSpice@hexbear.net 9 points 2 days ago

That's what really gets me! I see the Grammarly commercials where they say you can just follow the AI to improve/write your papers and get the grade you want. Cool, but have you considered that the grade isn't the end goal? Like, maybe the assignment was meant to teach you something, and by not learning it you have harmed your studies? Maybe getting a lower grade and some feedback would do you more good?

[-] EveningCicada@hexbear.net 71 points 3 days ago

galaxy-brain I'm coming up with 500 theses every hour and they're all wrong

[-] InevitableSwing@hexbear.net 41 points 3 days ago

Just keep prompting. You'll get there.

[-] SuperZutsuki@hexbear.net 34 points 3 days ago* (last edited 3 days ago)

But who's going to tell me when it's right? Maybe I'll have grok check Claude's work... thinking-about-it

[-] InevitableSwing@hexbear.net 23 points 3 days ago

The AI Centipede

[-] Kumikommunism@hexbear.net 60 points 3 days ago

There is something very funny about sociology research being written by the stolen words of m/billions of people being smashed together. It's almost avant garde.

[-] reaper_cushions@hexbear.net 37 points 3 days ago

I recently tried using an LLM to find out whether a niche issue in my thesis had already been discussed in the literature. I fed the LLM extremely specific prompts, specific enough, in fact, that it coughed up results that looked close enough to my problem that I initially thought it had actually found literature on my question. The problem: the literature either didn't exist, even though the authors it was attributed to are real contributors to my field, or it did exist but didn't contain the answer the LLM gave. I know because I had read literally every paper the LLM spat out that actually exists. These machines are OK at some simple tasks, like giving a general overview of the current literature in a field, but they fail miserably at anything more specific than that.

[-] UmbraVivi@hexbear.net 13 points 2 days ago

The way I think about it is: the more frequently the correct answer to a question has been given on the internet, the more likely an LLM is to give that correct answer to that question. So it's pretty reliable on surface-level questions in a vast array of fields. But the more specific and niche you get, the less explored the topic you're asking the LLM about, the more likely it is to just make stuff up.

[-] Moidialectica@hexbear.net 22 points 3 days ago

Trust me, it's like this for every field: geology, programming, history, story writing, philosophy.

I have made use of it, and I do regularly use it, but to not acknowledge that it's fucking shit and should not be put near any serious work without the utmost scrutiny is a joke.

And I believe the propagators of AI either lack the skills needed to actually tell how bad it is, or want to believe otherwise because it makes things so much easier for them.

[-] red_giant@hexbear.net 8 points 2 days ago

LLMs are a remarkable improvement on Google's "I'm feeling lucky" button

[-] BodyBySisyphus@hexbear.net 43 points 3 days ago

Looking forward to the coming retraction because it turns out your interview coding was nondeterministic and your results are not reproducible.

...somebody's out there trying to see if research is reproducible, right? anakin-padme-2

...papers will get pulled from LLM training sets when they get retracted, right? anakin-padme-2

...there isn't a massive number of social sciences papers already published that are basically useless because their results aren't meaningful outside of a narrow set of subjectively specified predictor variables, right? anakin-padme-2
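
A hypothetical sketch of what checking that would even look like (nothing from the paper, just an illustration): run the same coding prompt over the same excerpts twice and measure how often the labels agree. Assumes the OpenAI Python SDK with an API key configured; the model name, labels, and excerpts are all placeholders.

```python
# Hypothetical reproducibility check for LLM-based qualitative "coding".
# Assumptions: openai SDK installed, OPENAI_API_KEY set, placeholder model/labels.
from openai import OpenAI

client = OpenAI()
LABELS = ["trust", "distrust", "ambivalent"]  # example coding scheme

def code_excerpt(excerpt: str, temperature: float = 1.0) -> str:
    """Ask the model to assign exactly one code to an interview excerpt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=temperature,
        messages=[
            {"role": "system",
             "content": f"Answer with exactly one of: {', '.join(LABELS)}."},
            {"role": "user", "content": excerpt},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

excerpts = [
    "I stopped reading the news entirely after that year.",
    "My doctor explained the risks and I felt reassured.",
    "Honestly I don't know who to believe anymore.",
]

# Two independent runs over the same data. If the model were a reliable coder,
# the runs should agree nearly everywhere (with a real sample you would report
# something like Cohen's kappa rather than raw agreement).
run_a = [code_excerpt(e) for e in excerpts]
run_b = [code_excerpt(e) for e in excerpts]
agreement = sum(a == b for a, b in zip(run_a, run_b)) / len(excerpts)
print("run A:", run_a)
print("run B:", run_b)
print(f"agreement between runs: {agreement:.0%}")
```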

[-] BodyBySisyphus@hexbear.net 25 points 3 days ago

Also holy hell, is this what a vibe-coded website looks like? https://www.shrutimishra.co/

[-] OgdenTO@hexbear.net 18 points 3 days ago

Hey Claude, make me a terrible website

[-] bdonvr 15 points 3 days ago

somebody's out there trying to see if research is reproducible, right?

Claude says it looks reproducible. Claude, write a paper confirming...

[-] Damarcusart@hexbear.net 27 points 3 days ago

Ah yes, why bother learning all that pesky "medical knowledge" when training to become a doctor, when you can just get an AI to do all the work for you! I'm sure this sort of attitude will have no real world repercussions!

[-] red_giant@hexbear.net 10 points 2 days ago

Congratulations on spending $200,000 at Harvard and completing your PhD. Unfortunately you learned literally nothing.

[-] robador51@lemmy.ml 14 points 2 days ago

I work in an environment where persuasion and synthesis of vast amounts of information gives a major edge. I see two types of people: those who are actually really good at what they do without the help of LLMs, and who can make their output even better by using AI to hone and optimize their work, and those who are absolutely shit without LLMs and who are even worse once they start using them.

Unfortunately the latter group is the vast majority.

The first group already has strong ideas, and the LLM can accelerate and elevate their thinking. They use it as a brainstorming helper. They validate the output. They don't necessarily work faster.

The second group doesn't know what to do, will ask the LLM, trust the output with little to no scrutiny. They use it as a means of production. They deliver fast.

I think we see this pattern in most fields. Software development, for example: a true senior developer might be able to create better output, or even produce things a bit faster. But a bad programmer will still have bad output, and probably exponentially so the more they lean into the tool.

The second group is dangerous. They're as delusional as the output the LLMs tend to generate. They feel empowered, and see the increase in output as a personal victory, as if it unlocked some lingering quality in them that was always there. Qualities that highly capable people had to work for years to attain. Look how productive I am, look at what I did, they'll think. They create the noise that capable people now have to deal with; it's all the slop we see, and it's everywhere.

That's what I hate about it.

Anyway

[-] Big@hexbear.net 34 points 3 days ago

At this point, the only way to save higher learning is to go back to exclusively oral teaching.

Turns out Socrates was right all along.

[-] Inui@hexbear.net 31 points 3 days ago* (last edited 3 days ago)

A lot of professors I know are pivoting back to handwritten proctored exams, oral presentations/Q&As, etc. because there's really no stopping the slop machine. A lot of professors are uncomfortable with doing something like reporting tons of students for cheating, since you can't prove it easily, so that's their alternative.

Except one CS professor I know who failed 30% of his class on an exam, reported them all to student conduct, and sent the rest of the class a warning lol. He ain't having it.

[-] Blakey@hexbear.net 13 points 3 days ago

The uni I attended is (depressingly) embracing LLMs and even they didn't stop in-person exams...

[-] Inui@hexbear.net 13 points 3 days ago

To some extent, you have to embrace it. Students are going to use it anyway and the institution isn't going to let you fail 50% of your class every semester. There are good ways and bad ways to do it though and some professors are assigning things that try to get people to reflect on their AI usage, like asking multiple LLMs a question and comparing/contrasting their answers to pick them apart. It's really wreaking havoc on online courses in particular though, which is unfortunate because although I have my criticisms of them, they're a big boon to working adults who want to further their education or change careers.
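
A hypothetical sketch of that compare-the-models exercise (not something from the article): pose one question to two different chatbots and print the answers side by side so students can pick them apart. Assumes the openai and anthropic Python SDKs with API keys configured; the model names are placeholders.

```python
# Hypothetical "ask several LLMs and compare" classroom exercise.
# Assumptions: openai and anthropic SDKs installed, API keys set, placeholder models.
from openai import OpenAI
import anthropic

question = "Summarise the replication crisis in psychology in three sentences."

gpt_answer = OpenAI().chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

claude_answer = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=400,
    messages=[{"role": "user", "content": question}],
).content[0].text

# Students then mark where the answers agree, where they contradict each other,
# and which claims neither model can actually source.
for name, answer in [("GPT", gpt_answer), ("Claude", claude_answer)]:
    print(f"--- {name} ---\n{answer}\n")
```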

[-] Le_Wokisme@hexbear.net 12 points 3 days ago

designing tests where the llm will always get it wrong would be a good lesson about not trusting the things

[-] InevitableSwing@hexbear.net 22 points 3 days ago

I just had a horrible thought. Soon there could be...

SocratesAI: Listen your way to knowledge!™

[-] SparkyOrange@hexbear.net 17 points 3 days ago

Please step away from the lathe, I beg you

[-] Meltyheartlove@hexbear.net 13 points 3 days ago* (last edited 3 days ago)

https://socrat.ai/

AI Tools Built for Teaching And Learning. Socrat Helps Teachers and Students Use AI Effectively.

[-] mrfugu@hexbear.net 37 points 3 days ago

I don't give a shit if it's qualitative. If it's data you need directly recorded, please don't use the hallucination chat service.

[-] Hohsia@hexbear.net 20 points 3 days ago

Tech bros (and all those who repeat their talking points) are dangerous people and should be treated as such

Sociology students and cheating

Fork found in kitchen

[-] FnordPrefect@hexbear.net 29 points 3 days ago

geordi-no “Children must be taught how to think, not what to think.”

geordi-yes “Children must not be taught what to think, but how to not think.”

[-] ClathrateG@hexbear.net 28 points 3 days ago

I'm gonna prooompt hillgasm

[-] Ram_The_Manparts@hexbear.net 21 points 3 days ago

The 9 prompts are just 9 videos of me loudly farting into a jar.

Sorry.

[-] Blakey@hexbear.net 11 points 3 days ago

Honestly less vulgar than what actually happened shrug-outta-hecks

[-] LetterLiker@hexbear.net 14 points 3 days ago

LetterLikian Jihad against the thinking machines and their pathetic acolytes.

[-] barrbaric@hexbear.net 13 points 2 days ago

Agreed, except that this implies LLMs can actually think, which is ceding too much ground.

[-] CupcakeOfSpice@hexbear.net 10 points 2 days ago

I think in Dune's Butlerian Jihad they considered anything that "thought" on the level of an electronic calculator a thinking machine. An abacus might be alright, but we have Mentats for that.

[-] aqwxcvbnji@hexbear.net 7 points 2 days ago

Dune's Butlerian Jihad

I've seen "Butlerian jihad" used so many times on this site, and never knew it was a Dune reference. I always thought it was some inside joke I didn't get which referenced feminist theorist Judith Butler, in the sense of "we need the Holy War for feminism"

[-] Are_Euclidding_Me@hexbear.net 6 points 2 days ago

I bet that was a little confusing in some contexts! But yeah, that's how Dune manages to be set far in our future and yet computers don't exist. They apparently used to and then all of humanity decided that was Very Bad and destroyed them all, in a conflict called The Butlerian Jihad (I don't think we ever learn where the name comes from). And now mentats do the work that computers used to

[-] Flyberius@hexbear.net 5 points 2 days ago

It's named after Serena Butler, the woman who started it. This is extended Duniverse though, not in Frank's books.
