this post was submitted on 05 May 2025
117 points (100.0% liked)

Main, home of the dope ass bear.

A hexbear.net commainity.

I'm sorry, so fucking angry. Students with sources that don't exist. Students with sources that exist but then the quotation doesn't exist.

I'm so fucking mad, because it's extra work for me (that I'm sure as hell not getting compensated for), and it also entirely defeats the purpose of the fucking class (it's writing/research, so like, engaging in a discipline and looking at what's been written before on your topic, etc.)

Kill me please. Comrades, I'm so tired. I just want to teach writing. I want to give students a way to exercise agency in the world -- to both see bad arguments and make good ones. They don't care. I'm so tired.

BTW, I took time to look up some of these sources my student used, couldn't find the quotes they quote, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

foucault-madness agony-shivering allende-rhetoric

[–] [email protected] 63 points 1 week ago (1 children)

Trash Future repeatedly makes the point that AI chat bots are the inverse of the printing press. The printing press created a way for information to be reliably stored, retrieved, and exchanged. It created a sort of ecosystem where ideas (including competing ideas) could circulate in society.

Chat bots do the opposite. They basically destroy the reliable transmission of information and ideas. Instead of creating reliable records of human thought (models, stories, theories, etc.), it's a black box which randomly messes with averages. It's so fucking harmful

[–] [email protected] 23 points 1 week ago* (last edited 1 week ago) (1 children)

This makes no sense because it gives the general problem of epistemology a year zero date of November 30th, 2022.

People were lying, distorting and destroying prior to the invention of the printing press. For example, one of the most obvious is the Donation of Constantine, which the Catholic Church used to extort European kings starting in the 8th century.

The printing press actually made things worse. For example, the Gospel of Barnabas is thought to have been so widely proliferated because the forger printed fabricated copies of the Gelasian Decree.

Creating reliable records of "human thought" doesn't matter because the problem isn't what people think, it's what the actual truth is. This isn't even the first system that greatly obscures historical thought for the benefit of a select few. If you were a peasant in the 1500s, your ChatGPT was the conspiracy between your local lord and your local pastor to keep you compliant. The German peasants literally fought a war over it.

There is no place in academia in which an LLM would be a reliable store of information, because it's a statistical compilation, not a deterministic primary, secondary, or tertiary source. Trash Future, as always, is tilting at windmills erected by capitalist propaganda.

[–] [email protected] 21 points 1 week ago (1 children)

because it gives the general problem of epistemology a year zero date of November 30th, 2022.

No it doesn't. It's just pointing out that a slop machine was invented around then. The printing press enabled information to be shared at a much greater scale than before. The LLM has enabled slop to be produced at a much greater scale than before. It's a question of degree.

Creating reliable records of "human thought" doesn't matter because the problem isn't one of what do people think, it's what is the actual truth.

And how is truth determined? What do you call a truth that nobody believes? Global warming is happening whether or not people believe in it. But it could have been avoided if more people believed in climate change, and also believed it was worth taking action against. IDK what to think about some platonic ideal of "truth"

There is no place in academia in which an LLM would be a reliable store of information because it's a statistical compilation not a deterministic primary source, secondary or tertiary source.

Do you not see how the general public is actually using LLMs? It's bleeding into academia, too

[–] [email protected] 15 points 1 week ago* (last edited 1 week ago) (1 children)

The LLM has enabled slop to be produced at a much greater scale than before. It’s a question of degree.

The problem with this is that an LLM allows you to make slop at a much greater scale than ever before. Commercial and institutional slop has been at these scales for centuries. Monasteries were literally LLMs that forged legal documents en masse all over Europe. The Gulf of Tonkin Resolution was literally birthed by slop. LLMs make institutional and commercial slop cheaper, but the scale is still limited by the capitalist class controlling the risk of brand dilution and loss of trust.

There are way better arguments against LLMs, e.g. that they push automation of social engineering scams into overdrive. The idea that there was ever a "truth world" and a "post truth" world that LLMs have created is ignorant. People have always fallen victim to these sorts of things en masse. As far as the workers involved, I can see the argument that databases destroyed secretarial pools and jobs, but I'm not going to extend the sympathy to the "honorable workers" at the racism factory.

The racism factory has always existed, they've just made it super cheap to order from now.

And how is truth determined? What do you call a truth that nobody believes? Global warming is happening whether or not people believe in it. But it could have been avoided if more people believed in climate change, and also believed it was worth taking action against. IDK what to think about some platonic ideal of “truth”

Again, this is the general problem of epistemology. It has no answer. Turning away from it puts you right back to ordering from the racism factory, because the whole point of learning about epistemology, journalism, sourcing, research and the scientific method is to give you the tools to realize that you're ordering from the racism factory. Manufacturing consent happens regardless of what you think about the latest tooling. What OP is attempting to teach their students is how to research and communicate truthfully -- regardless of the tooling, the students are simply engaging in forgery.

You can literally make the same argument about the printing press itself, that it enabled greater forgery to happen than what was possible prior. Which was my point in the original reply.

Do you not see how the general public is actually using LLMs? It’s bleeding into academia, too

Sure, but we're not talking about the general public. We're talking about collegiate coursework. At the end of the day, if the assignment is to understand proper research and sourcing, directly using the output of an LLM should be an instant failure, because it avoids the skills you should be exercising.

Yes, LLMs have strained (broken?) the already austere and unrealistic expectations of academic systems across all levels. We should be leveraging that to reforge these systems to better serve students and educators alike, rather than trying to get the cat back in the bag regarding LLMs. You can only "defeat" LLMs by raising people up in spite of the existence of LLMs.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (3 children)

The problem with this is that LLM allows you to make slop to a much greater scale than ever before.

Yeah, and the printing press allowed weirdos like Martin Luther to spread ideas faster. You're just repeating what I said. That's exactly what I meant by "The LLM has enabled slop to be produced at a much greater scale than before" and "it's a question of degree"

The idea that there was ever a "truth world" and a "post truth" world that LLMs have created is ignorant

I didn't say or imply that!!

Again this is the general problem of epistemology. It has no answer.

Yes, and the whole point of the paragraph is that I'm not talking about "truth". I specifically said that I don't know what to think of some platonic ideal of "truth", scare quotes and all. The whole point of that paragraph is that I'm not talking about heady epistemology. I'm just talking about social phenomena and human behavior. That's why I talked about "human thought". That's why I talked about "information", which is a separate concept from "truth". I was specifically avoiding talking about "truth", but you brought it up

You can literally make the same argument about the printing press itself, that it enabled greater forgery to happen than what was possible prior. Which was my point in the original reply.

Yes, forgeries carry information. I talked about information and models in my first post, not truth

Yes, LLMs have strained(broken?) the already austere and unrealistic expectations of academic systems across all levels. We should be leveraging that to reforge these systems to better serve students and educators alike rather than trying to get the cat back in the bag regarding LLMs.

We should be dismantling the industrial chatbots, and redirecting all that compute towards something useful. Get that cat in the fucking bag. I feel like we basically agree on everything, you're just seeing an epistemological argument where there is none. I was trying to avoid epistemology from the start.

Maybe TF talked about "post-truth" and I just don't recall? But I'm pretty sure they were talking about information as well

[–] [email protected] 53 points 1 week ago (1 children)

cheating in education in general, AI or not, is more caused by the financial and systemic repercussions of failing. When these students fail a class, it's often another few thousand dollars they don't have down the drain, and if they fail too many classes it locks them out of higher education entirely

failure is one of the biggest drivers of true learning, and the educational system directly discourages it

[–] [email protected] 38 points 1 week ago

Oh I get that -- the financial reality is there for sure, and I recognize they have other classes, etc. Don't get me wrong, I know who the "true" villain is.

Doesn't mean I can't be mad at these AI companies for unleashing this on us. It actively makes teaching the skills to understand writing harder since students can get close to "good" writing with these machines, but the writing it produces crumbles under the slightest scrutiny. We're actively harming thought and understanding with them.

[–] [email protected] 52 points 1 week ago

Kids can't even cheat properly anymore, because of woke

[–] [email protected] 51 points 1 week ago (4 children)

I tried using AI to help find sources for my partner's thesis. It's a niche topic on body phenomenology and existentialism in pregnancy and birth. Instead, it cited Heidegger books that don't even exist. A colleague recommended it, but honestly, you would have to be insane to rely on this.

[–] [email protected] 34 points 1 week ago* (last edited 1 week ago)

The more specific the information, the more it lies

[–] [email protected] 33 points 1 week ago (2 children)

I get so annoyed when people tell me to ask an AI something. It has no knowledge and no capacity for reason. The only thing it can do is produce an output that an inexpert human could potentially accept as true because the underlying statistics favour sequences of characters that, when converted to text and read by a human, appear to have a confident tone. People talk about AI hallucinating wrong answers and that's giving it too much credit; either everything it outputs is a hallucination that's accepted more often than not, or nothing it outputs is a hallucination because it's not conscious and can't hallucinate, it's just printing sequential characters.

[–] [email protected] 21 points 1 week ago

it's not all that far from the "post what your autocorrect completes this sentence as" thing
the llms are considerably more sophisticated sure, but what they do is fundamentally the same
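The autocomplete comparison can be made concrete with a toy sketch. The bigram model below is purely illustrative (real LLMs use learned transformer weights over subword tokens, not raw word counts), but the objective -- predict the next token from context, with no model of meaning -- is the same shape:

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which,
# then always emit the most frequent successor. The corpus and
# everything about it is made up for illustration.
corpus = (
    "the model predicts the next word "
    "the model has no idea what the words mean "
    "the output just looks like confident text"
).split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def complete(word, length=5):
    # Greedily extend `word` by the statistically likeliest
    # successor; fluent-looking, but there is no "knowledge" here.
    out = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))
```

The output reads like a sentence fragment precisely because it reuses the corpus's own word statistics, which is the commenter's point about confident-sounding output with nothing underneath.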

[–] [email protected] 17 points 1 week ago

It's advanced autocorrect. Calling it AI is an insult to Skynet.

[–] [email protected] 16 points 1 week ago

LLMs in general cannot handle finding quotes because they can't discern between real ideas or regurgitating ideas in a slightly different format.

[–] [email protected] 35 points 1 week ago (1 children)

told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

If the policy for plagiarism at your school is an F on the assignment, that seems fair to me. Asking LLMs to do your work is plagiarism.

[–] [email protected] 29 points 1 week ago* (last edited 1 week ago) (8 children)

I mean, I could go to that, but I figure as a writer, to fabricate quotations and evidence is fundamentally failing work.

I'm trying to give the student the chance to save themselves too. If they just cited (for instance) the quotation about "all great historical figures appear twice" as being from The German Ideology instead of the 18th Brumaire, that's not a problem -- the quotation exists, it's simply the student being sloppy at documentation.

However, to claim that someone stated something they didn't -- that's just fundamentally failing work (it would be like going online and saying Mao said that "power grows out of the hands of the peasantry" instead of "power grows out of the barrel of a gun").

I should note - my class has a policy that students can use AI as long as they clear it with me. However, they're responsible for their work, and I won't accept work with fake quotes. That's dogshit writing.

[–] [email protected] 16 points 1 week ago (1 children)

to fabricate quotations and evidence is fundamentally failing work.

it would be writing fiction, if they weren't using an llm

[–] [email protected] 14 points 1 week ago

Seems generous tbh, if I submitted a work with incorrect citing I would lose marks and I would have to accept it, because that's fair enough

[–] [email protected] 32 points 1 week ago

(it's writing/research, so like, engaging in a discipline and looking at what's been written before on your topic, etc.)

BTW, I took time to look up some of these sources my student used, couldn't find the quotes they quote, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

I think they will learn an important life lesson: that if they're going to cheat, then they have to, at a minimum, be sure that they are at least "getting the right answer". The tide of AI dystopia is unstoppable, but you can at least teach them that they can't just completely shut their brains off to the extent that they are just presenting completely fabricated research and factual claims.

[–] [email protected] 31 points 1 week ago

Seems a fair policy. I like to imagine if you stress this policy up front in advance, students might actually check and verify all their own sources (and thus actually do their own research even with ai stuff)

[–] [email protected] 29 points 1 week ago (1 children)

Using AI to write papers for a writing class is like using speech to text for a touch typing course. You’re bypassing the exercises that will actually provide the value you’re paying for

[–] [email protected] 24 points 1 week ago (1 children)

I wish they understood this. I tried to explain to them that I want them to learn how to write so that when they use AI they'll see what's useful or not from it.... it's heartbreaking.

[–] [email protected] 15 points 1 week ago

The end point is devaluation of the digital

Future value will reside in the tactile

[–] [email protected] 26 points 1 week ago* (last edited 1 week ago) (3 children)

I'm going back to school for my Master's degree. I am in my 30s and I have realized that "Generative AI" does nothing but cheat me out of learning. Using AI for your homework/assignments does nothing but outsource your thinking and learning to the machine. It makes you dumber.

[–] [email protected] 26 points 1 week ago

BTW, I took time to look up some of these sources my student used, couldn't find the quotes they quote, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

Most class syllabuses I've seen tie LLM use into the same category as plagiarism. That's an automatic failure on the assignment and sometimes failure of the class.

[–] [email protected] 23 points 1 week ago (2 children)

It seems 100% fair to me. Using AI will be a big part of the future, but if your class is about a particular set of skills that don't involve asking Computer-Daddy to do your homework for you then good on you for trying to ensure it.

[–] [email protected] 24 points 1 week ago* (last edited 1 week ago) (2 children)

Using AI will be a big part of the future,

Yet absolutely NONE of the people pushing for this future educate people about the limitations of LLM chatbots. In fact, they deliberately mislead the public. I think about the doctor scene in Idiocracy a lot these days

[–] [email protected] 15 points 1 week ago

look the LLM will just write the code for a better LLM and the better one won't have those limitations

[–] [email protected] 23 points 1 week ago* (last edited 1 week ago) (2 children)

if two students are using the exact same (unneeded) variable in a script, did they use a similar prompt or do they talk to each other saruman-orb

fucking hate this shit, something which could be done in like 20 lines of code is like 200. I don't particularly have to care, cause I'm not teaching programming, but jesus christ

[–] [email protected] 23 points 1 week ago

Your policy is fair, because then your students would hypothetically actually use their damn brains a tiny bit, which is what school should be about

I would also posit that any false quotation could just be docked one letter grade, so 5 of them is an F

[–] [email protected] 20 points 1 week ago (1 children)

I had to do this in like 2008 for a humanities course about web 2.0, because my paper was about forum types and how different designs had different outcomes, etc. I literally could find almost no relevant stuff to quote so eventually I just fully bullshitted lol.

[–] [email protected] 21 points 1 week ago

Yeah that's definitely not an excuse for this paper. Also, my move in that position was always to just find a source broader than my topic and apply it like a lens. Works pretty well.

[–] [email protected] 19 points 1 week ago* (last edited 1 week ago) (1 children)

Pass/fail verbal exams. Seriously, just ask them a random very simple question about their paper, and if they can't answer it, then 0%.

This bypasses the AI problem because if they've gone to all the trouble of making a fake paper and then learning it's bullshit by heart then hopefully they've learned something about how papers are written, even if by accident.

[–] [email protected] 15 points 1 week ago

Yes that sort of thing works well, but it takes a huge amount of extra time to actually do properly.

[–] [email protected] 19 points 1 week ago (1 children)

I was in conversation with a friend who works in tech and we were talking about a thing that we wanted to find some science on. So I found a paper on it and started to read, but before I got done he replied by sending me a chat gpt summary of the paper. And I could already tell it wasn't correct from reading it myself even a little. What I really wanted to say to him was that I'd rather think for myself, tyvm.

If students are all doing this now, nobody will think of anything themselves anymore or form an actual deep understanding of a thing, be it whatever. Not really. Anyone who has done reading knows how that shapes us and then can develop into something deeper. It's the f'n "innovation" these same tech types go on and on about that dies with this tech.

[–] [email protected] 18 points 1 week ago* (last edited 1 week ago) (1 children)

I was in conversation with a friend who works in tech and we were talking about a thing that we wanted to find some science on. So I found a paper on it and started to read, but before I got done he replied by sending me a chat gpt summary of the paper. And I could already tell it wasn’t correct from reading it myself even a little. What I really wanted to say to him was that I’d rather think for myself, tyvm.

This happens in academia without LLMs. Plenty of official tertiary sources used in curricula across the US insinuate that Adam Smith described capitalism as an "invisible hand of the market" because Paul Samuelson is a propagandist.

How many people claim they read Marx and then go on to make a stupid statement about how "Marx did not consider some brain dead topic the speaker just came up with"?

LLMs merely accelerate these accepted and proliferated social practices. Don't get me wrong, this is "new", but many people are arguing as if guns don't exist and LLMs are a nuke, when in practice we've been shooting bolt action rifles and LLMs are a gas-operated machine gun.

[–] [email protected] 19 points 1 week ago (1 children)

Fail them on the first fake citation. Does your uni have SafeAssign or Turnitin?

[–] [email protected] 15 points 1 week ago (2 children)

No that shit is vaporware (the AI detection).

I use TII to verify quotes are real now (I turn off the quotation parsing thing so everyone's paper should be between 20-35 percent "plagiarized"). AI detection is vaporware tho, may as well flip a coin.

[–] [email protected] 17 points 1 week ago (1 children)

I'm getting my Master's right now at a college that has a really big foreign student population, particularly Desi people. I was in a class with this one kid who straight up did not speak English -- like, not that he spoke it poorly or with a heavy accent, he just did not speak it at all. He had ChatGPT on his laptop every class. At the end of the semester we had student presentations, and this kid presented a slide show that was clearly made by AI, bad images and all, and kept trying to speak English phonetically. It was awkward as fuck.

[–] [email protected] 17 points 1 week ago* (last edited 1 week ago)

I would put some of this on the school, because some colleges allow people to meet English language requirements with just things like Duolingo scores, which would be ridiculous if I were trying to get into a university in just about any other country.

It leads to them struggling in all their classes since they can't comprehend the material, slowing things down for everyone else and taking extra resources in advising, tutoring, etc., all of which ends up compensating for the lack of language skills instead of providing the actual service it was meant for.

All because the school is relentlessly pursuing inflated international tuition.

[–] [email protected] 17 points 1 week ago

Your policy is fine, but what matters is the school's academic integrity rules and procedures. Usually you have to report it if you suspect they cheated or plagiarized.

I've heard some schools have given up on AI though and just refuse to pursue it. Made-up citations are an easy catch, though.

[–] [email protected] 14 points 1 week ago

being a teacher has to be absolute hell these days, I'm so sorry

[–] [email protected] 14 points 1 week ago

I would make having photo evidence of the citations available on request a prerequisite. They all have cell phones that can make this process very easy and screenshots of course are trivial.

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago)

Bwaaa "Tell you what Bawby, back in mah day, we used to type the main topic of our homework on wikipedia, paraphrase the best quotes, and then copy and paste the citation links for those quotes into the same citation generator the teacher would recommend in the class syllabus. Gosh darn kids don't even know how to bullshit their work enough for an overworked public school teacher to give it a B-."

[–] [email protected] 12 points 1 week ago (1 children)

Is this for a creative writing course? Because wtf

[–] [email protected] 16 points 1 week ago

Writing and research. So not "creative" - it's like, citing your sources in an argument, researching social issues, etc.

[–] [email protected] 11 points 1 week ago

Shit like this makes me kinda glad I dropped out rather than continue in higher ed. I can see how like 80% of my classmates back then would be the people using AI now. Maybe I'll go back to school for something worthwhile one day, I did have some decent philosophy and sociology professors, but the in-major stuff was just utterly useless.

I don't even know why this inspires this sort of reaction in me, I wasn't the type to use AI for everything, and I actually was starting to kinda enjoy writing papers towards the end, it wouldn't have affected me personally much in that sense, but (among many other things) being surrounded by people who fundamentally didn't care and did their best to cheat through it, or who had clearly been failed by their prior education, was pretty demoralizing, so I guess this lands in that category
