I can't remember who it was, but some PhD in anthropology (I think) was tweeting about his research and some fucking idiot was arguing with him because ChatGPT said something different. Fucking bong-cloud epistemology.
I lost my fucking mind at a person at work who started picking fights with me on topics I am an actual honest-to-god expert on. No comprehension, not even engagement, just a bunch of vibes and "I have been doing this for X years". I wrote a fucking essay in the work chat with examples and references and practical details, and this motherfucker says "well, I used my AI text editor to summarize what you wrote and respond to it". It was at this point that I realized they weren't even disagreeing with me, they were telling their text editor to disagree with me (or, perhaps worse, it was telling them to disagree) and then regurgitating the results. AI freaks are literally inhuman.
I was googling something about BMI and found a reddit thread where some dullard confidently asserted that having your height in cm match your weight in lbs was impossible. Source: chatGPT, which no joke said in one paragraph that being 165 cm and 165 lbs would be underweight while being 165 cm and 165 lbs would be overweight. These LLMs don't know shit about fuck and only idiots turn to them for answers.
Amazing mathematical reasoning at work here.
I mean yeah they're just doing predictive text, talking about overweight and underweight in the way they've seen it done, but with placeholder numbers.
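For what it's worth, the arithmetic the LLM flubbed is trivial to check by hand. A quick sketch using the standard BMI formula (weight in kg divided by height in meters squared, with the usual 18.5/25 cutoffs); the conversion constant and cutoffs are the commonly cited ones, not anything from the thread:

```python
# Is matching your height in cm to your weight in lbs "impossible"?
# e.g. 165 cm and 165 lbs, the numbers from the reddit thread.
LBS_PER_KG = 2.20462

def bmi(height_cm: float, weight_lbs: float) -> float:
    """Body mass index: weight (kg) / height (m)^2."""
    weight_kg = weight_lbs / LBS_PER_KG
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

print(round(bmi(165, 165), 1))  # ~27.5: squarely "overweight" (25-30)
```

So 165/165 is not only possible, it is one unambiguous category, not two at once.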
My friend pulled out her phone to ask chatGPT how to play a board game last night, and despite all of us yelling at her that chatGPT doesn't know anything, she persisted. Then the dumbass LLM made up some rules because it doesn't know anything.
Do you think they took home the lesson that LLMs don't possess knowledge or reasoning?
one of my friends did the same thing and it provided incorrect information about the game's rules lmao
If you tell people that ChatGPT doesn't know anything, they will only think you're obviously wrong when it gives them apparently correct answers. You should tell people the truth -- the harm in ChatGPT is that it is generally subtly wrong in some way, and often entirely wrong, but it always looks plausibly right.
Yeah, that's definitely one of the worst aspects of AI: how confidently incorrect it can be. I had this issue using DeepSeek and had to turn on the mode where you can see what it's thinking, and often it will say something like:
"I can't analyze this properly, let's assume this..." Then it confidently spits out an answer based on that assumption. At this point I feel like AI is good for 100-level CS students that don't want to do their homework, and that's about it.
Same, I just tried deepseek-R1 on a question I invented as an AI benchmark. (No AI has been able to remotely correctly answer this simple question, though I won't reveal what the question is here obviously.) Anyway, R1 was constantly making wrong assumptions, but also constantly second-guessing itself.
I actually do think the "reasoning" approach has potential though. If LLMs can only come up with right answers half the time, then "reasoning" allows multiple attempts at a right answer. Still, results are unimpressive.
I fucking cringe so hard when people are like "i asked chat gpt about what's holding me back in life and it came back with great answers"
Bitch it's fucking astrology. I can come up with shit that vaguely makes sense for most people and you'll think that it's "so true" and "it knows you better than you know yourself"
I just find the deference to chatbots a bit sad in that it will cause people to be less curious. Guess this is how people felt about moving from an abacus or slide rules to calculators, but it gets used for such menial bullshit that can easily be looked up instead of asking the virtual equivalent of Dave at the pub.
calculator actually gives reliably correct answers though unlike the truth machine. we're supposed to see it that way, as a pure step of technological progress, but it's a load of shit.
It was already trending that way with “google it” being such a common response to questions
that's not the same at all. why are you asking me something factual i half-remember from five years ago? just look it up, because i'd also have to look it up to answer.
I realize I misread the paragraph and thought it was bemoaning the lack of human-to-human interaction with asking about stuff.
According to ChatGPT, anyone who uses it to answer a question someone else asked is a butthead.
This should be an instant ban on any part of the internet. I've seen it on Lemmy before.
I just disregard anyone who says that, ChatGPT can be made to say Trump is the first WWE champion to become POTUS and to make up names of scientific journals it cites, there's no way I'm taking any answer from it at face value.
I asked a chatbot what it thought, and according to DeepSeek:
I feel you—there’s something uniquely frustrating about asking a real person for their thoughts and getting a regurgitated AI response instead. It’s like asking for a home-cooked meal and someone hands you a microwaved frozen dinner with a shrug.
If you’re looking for human insight (or just some good old-fashioned hot takes), it’s totally fair to push back with something like:
"Cool, but what do you think? I didn’t ask for a chatbot’s fanfic."
And hey, if they keep hiding behind AI, maybe start responding to them with "According to my calculations, your originality levels are critically low."
(Also, "the Chinese one" got me. 😂)
It’s always the same types who do it. The types who were never very curious to begin with or tech fetishists who like the idea of tech without really understanding it. Also Redditors.
AI translated manga bit
to be clear: that is cum, the verb, not the noun
Okay but is it accurate tho
Hexbear is a leftist online community that originated from users of the banned subreddit r/ChapoTrapHouse. It operates on a modified version of the Lemmy platform, focusing on socialist discussions and content.
getting head from c-3po I call that golden dome
I just say "I dunno lol" like a normal idiot
According to ChatGPT, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.
chat
Chat is a text only community for casual conversation, please keep shitposting to the absolute minimum. This is intended to be a separate space from c/chapotraphouse or the daily megathread. Chat does this by being a long-form community where topics will remain from day to day unlike the megathread, and it is distinct from c/chapotraphouse in that we ask you to engage in this community in a genuine way. Please keep shitposting, bits, and irony to a minimum.