Critical thinking (slrpnk.net)
submitted 2 weeks ago by [email protected] to c/[email protected]
top 50 comments
[-] [email protected] 99 points 2 weeks ago

Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online and copy and paste shit and share answers and stuff, but the internet can also just be a really great educational resource in general. I think that using LLMs in non-load-bearing, "trust but verify" type roles (study buddies, brainstorming, very high-level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google; I can just kind of chat with the LLM and refine it into a narrower, more googleable subject.

[-] [email protected] 135 points 2 weeks ago

trust but verify

The thing is that an LLM is a professional bullshitter. It is actually trained to produce text that can fool an ordinary person into thinking it was produced by a human. The facts come second.

[-] [email protected] 49 points 2 weeks ago

Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and I don't even know where to start attacking it, the LLM can sometimes save me hours of googling: I just describe my problem to it in a chat format, describe what I want to do, and ask if there's a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that's why I go and verify and read the docs myself instead of just blindly copying and pasting.

[-] [email protected] 35 points 2 weeks ago* (last edited 2 weeks ago)

That last step of verifying is often being skipped and is getting HARDER to do

The hallucinations spread like wildfire on the internet. It doesn't matter what's true, just what gets clicks, and that encourages more apparent "citations". An even worse fertilizer of false citations is the desire of power-hungry bastards to push false narratives.

AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially for things with life-and-death consequences like medical school

[-] [email protected] 23 points 2 weeks ago

I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”

I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).

[-] [email protected] 21 points 2 weeks ago

And just as back then, the problem is not with people using something to actually learn and deepen their understanding. It is with people blatantly cheating and knowing nothing because they don’t even read the thing they’re copying down.

[-] [email protected] 18 points 2 weeks ago* (last edited 2 weeks ago)

Something I think you neglect in this comment is that yes, you're using LLMs in a responsible way. However, this doesn't translate well to school. The objective of homework isn't just to reproduce the correct answer. It isn't even to reproduce the steps to the correct answer. It's for you to learn the steps to the correct answer (and possibly the correct answer itself), and the reproduction of those steps is a "proof" to your teacher/professor that you put in the effort to do so. This way you have the foundation to learn other things as they come up in life.

For instance, if I'm in a class learning to read latitude and longitude, the teacher can give me an assignment to find 64° 8′ 55.03″ N, 21° 56′ 8.99″ W on the map and write where it is. If I want, I can just copy-paste that into OpenStreetMap right now and see what horrors await, but to actually learn, I need to manually track down where that is on the map. Because I learned to use latitude and longitude as a kid, I can verify what the computer is telling me, and I can imagine in my head roughly where that coordinate is without a map in front of me.
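(As an aside, getting from that degrees/minutes/seconds form to the decimal form that OpenStreetMap and friends accept is just arithmetic; a quick illustrative sketch in Python, using the coordinate above:)

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert a degrees/minutes/seconds coordinate to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # By convention, South and West hemispheres are negative
    return -value if hemisphere in ("S", "W") else value

# 64° 8′ 55.03″ N, 21° 56′ 8.99″ W, the coordinate from the example
lat = dms_to_decimal(64, 8, 55.03, "N")
lon = dms_to_decimal(21, 56, 8.99, "W")
print(f"{lat:.4f}, {lon:.4f}")  # 64.1486, -21.9358
```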

Learning without cheating lets you develop a good understanding of what you: 1) need to memorize, 2) don't need to memorize because you can reproduce it from other things you know, and 3) should just rely on an outside reference work for whenever you need it.

There's nuance to this, of course. Say, for example, that you cheat to find an answer because you just don't understand the problem, but afterward, you set aside the time to figure out how that answer came about so you can reproduce it yourself. That's still, in my opinion, a robust way to learn. But that kind of learning also requires very strict discipline.

[-] [email protected] 8 points 2 weeks ago

To add to this, how you evaluate the students matters as well. If the evaluation can be too easily bypassed by making ChatGPT do it, I would suggest changing the evaluation method.

Imo a good method, although demanding for the tutor, is oral examination (maybe in combination with a written part). It allows you to verify that the student knows the stuff and understood the material. This worked well in my studies (a science degree), not so sure if it works for all degrees?

[-] [email protected] 79 points 2 weeks ago

The moment we change school to be about learning, instead of a requirement for employment, we will see students prioritize learning over "just getting through it to get the degree".

[-] [email protected] 20 points 2 weeks ago

Well, in the case of medical practitioners it would be stupid to let someone practice without a proper degree.

Capitalism is ruining schools, because people now use school as a qualification requirement rather than as a center of learning and skill development.

[-] [email protected] 18 points 2 weeks ago

As a medical student, I can unfortunately report that some of my classmates use Chat GPT to generate summaries of things instead of reading it directly. I get in arguments with those people whenever I see them.

[-] [email protected] 39 points 2 weeks ago

Only topic I am close-minded and strict about.

If you need to cheat as a highschooler or younger there is something else going wrong, focus on that.

And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.

[-] [email protected] 29 points 2 weeks ago

This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.

There isn't enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

I know being a non-traditional student massively affects my perspective, but like, if you don't want to learn about the precise thing your major is about...... WHY ARE YOU HERE

[-] [email protected] 32 points 2 weeks ago

It’s funny how everyone is against students using AI to get summaries of texts, PDFs, etc., which I totally get.

But during my time through med school, I never got my exam papers back (ever!). The exam was a test where I needed to prove that I have enough knowledge, but the exam should also show me where my weaknesses are so I could work on them. No, we never got our papers back. And this extends beyond med school; exams like the USMLE are long and tiring, and at the end of the day we just want a pass, another hurdle to jump over.

We criticize students a lot (rightfully so), but we don’t criticize the system where students only study because there is an exam, not because they are particularly interested in the topic at hand.

A lot of topics that I found interesting in medicine fell by the wayside because I had to sit for other examinations.

[-] [email protected] 30 points 2 weeks ago

Even more concerning, their dependence on AI will carry over into their professional lives, effectively training our software replacements.

[-] [email protected] 25 points 2 weeks ago

Students turn in bullshit LLM papers. Instructors run those bullshit LLM papers through LLM grading. Humans need not apply.

[-] [email protected] 24 points 2 weeks ago

The issue as I see it is that college is a barometer for success in life, which for the sake of brevity I'll just say means economic success. It's not just a place of learning, it's the barrier to entry - and any metric that becomes a goal is prone to corruption.

A student won't necessarily think of using AI as cheating themselves out of an education because we don't teach the value of education except as a tool for economic success.

If the tool is education, the barrier to success is college, and the actual goal is to be economically successful, why wouldn't a student start using a tool that breaks open that barrier with as little effort as possible?

[-] [email protected] 7 points 2 weeks ago

especially in a world that seems to be repeatedly demonstrating to us that cheating and scumbaggery are the path to the highest echelons of success.

..where “success” means money and power - the stuff that these high profile scumbags care about, and the stuff that many otherwise decent people are taught should be the priority in their life.

[-] [email protected] 23 points 2 weeks ago* (last edited 2 weeks ago)

Even setting aside all of those things, the whole point of school is that you learn how to do shit yourself, not pass it off to someone or something else to do for you.

If you are just gonna use AI to do your job, why should I hire you instead of using AI myself?

[-] [email protected] 10 points 2 weeks ago

I went to school in the 1980s. That was the time that calculators were first used in class and there was a similar outcry about how children shouldn't be allowed to use them, that they should use mental arithmetic or even abacuses.

Sounds pretty ridiculous now, and I think this current problem will sound just as silly in 10 or 20 years.

[-] [email protected] 8 points 2 weeks ago* (last edited 2 weeks ago)

lol, I remember my teachers always saying "you won't always have a calculator on you" in the '90s, and even then I had one of those calculator wristwatches from Casio.

And I still suck at math without one so they kinda had a point, they just didn't make it very well.

[-] [email protected] 16 points 2 weeks ago

galileosballs is the last screw holding the house together i swear

[-] [email protected] 15 points 2 weeks ago

This reasoning applies to everything. For example, the tariff rates the Trump admin imposed on various countries and places were very likely based on a response from ChatGPT.

[-] [email protected] 15 points 2 weeks ago

I've said it before and I'll say it again: the only thing AI can, or should, be used for in the current era is templating... I suppose things that don't require truth or accuracy are fine too, but yeah.

You can build the framework of an article, report, story, publication, assignment, etc using AI to get some words on paper to start from. Every fact, declaration, or reference needs to be handled as false information unless otherwise proven, and most of the work will need to be rewritten. It's there to provide, more or less, a structure to start from and you do the rest.

When I did essays and the like in school, I didn't have AI to lean on, and the hardest part of doing any essay was.... How the fuck do I start this thing? I knew what I wanted to say, I knew how I wanted to say it, but the initial declarations and wording to "break the ice" so-to-speak, always gave me issues.

It's shit like that where AI can help.

Take everything AI gives you with a gigantic asterisk, that any/all information is liable to be false. Do your own research.

Given how fast knowledge and developments in science, technology, medicine, etc. are transforming how we work, what you know is less important, now more than ever, than what you can figure out. That's what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify their findings. Once you know how to do that, you'll be able to adapt to almost any job you can comprehend from a high level; it's just a matter of time, patience, research, and learning. With that being said, some occupations have little to no margin for error, which is where my thought process inverts. Train long and hard before you start doing the job... stuff like doctors, who can literally kill patients if they don't know what they don't know. Or nuclear power plant techs. Stuff like that.

[-] [email protected] 30 points 2 weeks ago* (last edited 2 weeks ago)

When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?

I think that this is a big part of education and learning, though. You have to stare at a blank screen (or paper) and wonder, "How the fuck do I start?" You have to brainstorm, write shit down 50 times, edit, delete, start over. I think that process alone makes you appreciate good writing and how difficult it can be.

My opinion is that when you skip that step you skip a big part of the creative process.

[-] [email protected] 7 points 2 weeks ago* (last edited 2 weeks ago)

Arguably the biggest part of the creative process, even; it's the foundational structure.

[-] [email protected] 14 points 2 weeks ago

Well that disqualifies 95% of the doctors I've had the pleasure of being the patient of in Finland.

It's just not LLMs they're addicted to, it's bureaucracy.

[-] [email protected] 13 points 2 weeks ago* (last edited 2 weeks ago)

No child left behind already stripped it from public education...

Because there was zero incentive for a school performing well, and serious repercussions if a school failed multiple years, the worst schools had to focus only on what was on the annual test. The only thing that mattered was that year's scores, so that was the only thing that got taught.

If a kid got it early, they could be largely ignored so the school could focus on the worst.

It was teaching to the lowest common denominator, and now people are shocked the kids who spent 12 years in that system don't know the things we stopped teaching 20+ years ago.

Quick edit:

Standardized testing is valuable. For lots of rural kids, getting 99s was how they learned they were actually smart, not just smart for their tiny schools.

The issue with "no child left behind" was the implementation and the demand for swift responses to institutional problems that had been developing for decades. It's the only time moderates and Republicans agreed to do something fast, and it was obviously something that shouldn't have been rushed.

[-] [email protected] 12 points 2 weeks ago

With such a generic argument, I feel this smartass would have come up with the same shitty reasoning about calculators, Wikipedia, or Google when those things were becoming mainstream.

Using "AI to get through college" can mean a lot of different things to different people. You definitely don't need AI to "set aside concern for truth", and you can use AI to learn things better and faster.

[-] [email protected] 8 points 2 weeks ago

I mean I'm far away from my college days at this point. However, I'd be using AI like a mofo if I still were.

Mainly because there were so many statements in textbooks that were unclear (to me), and if I'd had someone I could ask stupid questions, I could have more easily navigated my university career. I was never really motivated to "cheat", but for someone with huge anxiety, it would have been beneficial to more easily search for things and ask follow-up questions. That being said, tech has only gotten better; half the stuff that's on the Internet now, I couldn't find growing up, even without AI.

I'm hoping more students would use it as a learning aid rather than just generating their work, though. There were a lot of people taking shortcuts, and "following the rules" felt like an undervalued virtue when I was in uni.

The thing is that education needs to adapt fast, and it's not typically known for that. Not to mention, most of the teachers I knew would have neither the creativity/skills, nor the ability, nor the authority to change entire lesson plans instantly to deal with the seismic shift we're dealing with.

[-] [email protected] 11 points 2 weeks ago

How people think I use AI: "Please write my essay and cite your sources."

How I actually use it:
"please make my autistic word slop that I already wrote into something readable for the neurotypical folk, use simple words, make it tonally neutral, stop using em-dashes, headers, and lists, and don't mess with the quotes"

[-] [email protected] 10 points 2 weeks ago

Gotta say, if someone gets through medical school with AI, we're fucked.

[-] [email protected] 10 points 2 weeks ago

Okay but I use AI with great concern for truth, evidence, and verification. In fact, I think it has sharpened my ability to double-check things.

My philosophy: use AI in situations where a high error-rate is tolerable, or if it's easier to validate an answer than to posit one.

There is a much better reason not to use AI -- it weakens one's ability to posit an answer to a query in the first place. It's hard to think critically if you're not thinking at all to begin with.

[-] [email protected] 7 points 2 weeks ago

We weren't verifying things with our own eyes before AI came along either, we were reading Wikipedia, text books, journals, attending lectures, etc, and accepting what we were told as facts (through the lens of critical thinking and applying what we're told as best we can against other hopefully true facts, etc etc).

I'm a Relaxed Empiricist, I suppose :P Bill Bailey knew what he was talking about.

this post was submitted on 19 May 2025
1567 points (97.9% liked)
