this post was submitted on 07 Apr 2025
346 points (96.3% liked)

tumblr


Welcome to /c/tumblr, a place for all your tumblr screenshots and news.

Our Rules:

  1. Keep it civil. We're all people here. Be respectful to one another.

  2. No sexism, racism, homophobia, transphobia or any other flavor of bigotry. I should not need to explain this one.

  3. Must be tumblr related. This one is kind of a given.

  4. Try not to repost anything posted within the past month. Beyond that, go for it. Not everyone is on every site all the time.

  5. No unnecessary negativity. Just because you don't like a thing doesn't mean that you need to spend the entire comment section complaining about said thing. Just downvote and move on.


[–] [email protected] 1 points 45 minutes ago

Wait, people actually try to use gpt for regular everyday shit?

I do lorebuilding shit (in which gpt's "hallucinations" are a feature not a bug), or I'll just ramble at it while drunk off my ass about whatever my autistic brain is hyperfixated on. I've given up on trying to do coding projects, because gpt is even worse at it than I am.

[–] [email protected] 2 points 1 hour ago (1 children)

Using AI is helpful, but by no means does it replace your brain. Sure, it can write emails and really helps with code, but for anything beyond basic troubleshooting and "short" code snippets, it's an assistant, not an answer.

[–] [email protected] 1 points 13 minutes ago

Yeah, I don't get the people who think it'll replace your brain. I find it useful for learning, even if it's not always entirely correct, but that's why I use my brain too. Even if it gets me 60% of the way there, that's useful.

[–] [email protected] 2 points 2 hours ago

I'm using it to learn to code! If anyone wants to try my game let me know I'll figure out a way to send it.

[–] [email protected] 4 points 2 hours ago (1 children)

I use ChatGPT mainly for recipes, because I'm bad at that. And it works great, I can tell it "I have this and this and this in my fridge and that and that in my pantry, what can I make?" and it will give me a recipe that I never would have come up with. And it's always been good stuff.

And I do learn from it. People say you can't learn from using AI, but I've gotten better at cooking thanks to ChatGPT. Just a while ago I learned about deglazing.

[–] [email protected] 2 points 55 minutes ago

You should try this thing, it's pretty neat, just press Maya or Miles. Though it requires a microphone, so you may have to open it on your phone.

https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo

[–] [email protected] 41 points 10 hours ago (4 children)

I feel like it's an unpopular take, but people are like "I used ChatGPT to write this email!" and I'm like, you should be able to write an email.

I think a lot of people are too excited to neglect core skills and let them atrophy. You should know how to communicate. It's a skill that needs practice.

[–] [email protected] 13 points 6 hours ago* (last edited 5 hours ago)

This is the reality: most people will abandon those skills, and many more will never learn them to begin with. I'm actually very worried about children who will grow up learning to communicate with AI and being dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.

AI is patient, always available, predicts desires and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This will mean children don't need to spell or structure sentences correctly to effectively communicate with AI, which means they don't need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills with people will be. GenZ and GenA already exhibit these issues without AI. Most people experience this when communicating across generations, as language and cultural context change. This will emphasize those differences to a problematic degree.

Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for the AI to understand. With communication history, which is inevitable tbh, and AI improving every day, it can develop a unique communication style for each child, what amounts to a personal language only the child and the AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on AI to effectively communicate, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, friends, all their relationships could be impacted.

I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and it will come at the expense of human freedom.

[–] [email protected] 2 points 4 hours ago (1 children)

I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.

[–] [email protected] 1 points 10 minutes ago

I use it to communicate with my landlord sometimes. I can tell ChatGPT all the explicit shit exactly as I mean it and it'll shower it and comb it all nice and pretty for me. It's not an apology, but I guess my point is that some people deserve it.

[–] [email protected] 8 points 10 hours ago* (last edited 10 hours ago) (3 children)

I've tried a few GenAI things, and didn't find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.

If you just had a discussion with it about what your favorite superhero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it's going to be bad because it is not intelligent. It does not think. And it's not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.

[–] [email protected] 7 points 10 hours ago (2 children)

Oh hey it's me! I like using my brain, I like using my own words, I can't imagine wanting to outsource that stuff to a machine.

Meanwhile, I have a friend who's skeptical about the practical uses of LLMs, but who insists that they're "good for porn." I can't help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don't even like it being a default on search engines, so the idea of using it just to regularly masturbate feels ... extremely selfish. I can see trying it as a novelty, but for a regular occurrence? It's an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.

[–] [email protected] 5 points 9 hours ago (1 children)

Using it for porn sounds funny to me given the whole concept of "rule 34" being pretty ubiquitous. If it exists, there's porn of it! Like even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures which you can search and refine, so do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That's all I can think of.

[–] [email protected] 2 points 7 hours ago* (last edited 7 hours ago)

He says he uses it to do sexual roleplay chats, treats it kinda like a make-your-own-adventure porn story. I don't know if he's used it for images.

[–] [email protected] -2 points 5 hours ago (1 children)

Now imagine growing up where using your own words is less effective than having AI speak for you. Would you have not used AI as a kid when it worked better than your own words?

[–] [email protected] 5 points 4 hours ago (1 children)

Wdym “using your own words is less effective than having AI speak for you”? Learning how to express yourself and communicate with others is a crucial life skill, and if a kid struggles with that then they should receive the proper education and support to learn, not be given an AI and told to just use that instead.

[–] [email protected] 1 points 3 hours ago* (last edited 3 hours ago)

It is, and they should, but that doesn't mean they will. GenZ and GenA have notable communication and social issues rooted in the technologies of today. Those issues aren't stopping our use of social media, smartphones, or tablets, or stopping tech companies from doubling down on the technologies that cause the issues. I have no faith they will protect future children when they have refused to protect present children.

What I mean is that much like parents who already put a tablet or TV in front of their kid to keep them occupied, parents will do the same with AI. When a kid is talking to an AI every day, they will learn to communicate their wants and needs to the AI. But AI has infinite patience, is always available, never makes their kid feel bad, and can effectively infer and accurately assume the intent of a child from pattern-recognizing communication that parents may struggle to understand. Every child would effectively develop a unique language for use with their AI co-parent that really only the AI understands.

This will happen naturally simply by exposure to AI, which parents seem more than willing to allow as easily as tablets, smartphones, and TV. It's like siblings where one kid understands the other better than the parent does and translates those needs to the parent. Children raised on AI may end up communicating with their caretakers better through the AI, just like the sibling, but worse. Their communication skills with people will suffer because more of their needs are getting met by communicating with AI. They practice communication with AI at the expense of communicating with people.

[–] [email protected] 81 points 17 hours ago (8 children)

The number of times I've seen a question answered with "I asked ChatGPT and blah blah blah," where the answer turned out to be complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.

[–] [email protected] 33 points 13 hours ago

This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
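
To make "looks right vs. is right" concrete, here's a minimal toy sketch; the prompt, the answer options, and the probabilities are all invented for illustration, and no real LLM works off a lookup table like this. The point is only that generation picks whichever continuation scores as most plausible, with no fact-checking step anywhere:

```python
# Toy illustration (hypothetical data, not any real model or API): an
# "associative" completer that ranks continuations purely by how plausible
# they look, never by whether they are true.

continuations = {
    "The capital of Australia is": [
        ("Sydney", 0.55),    # looks right (famous big city), but is wrong
        ("Canberra", 0.40),  # actually correct, yet scores lower here
        ("Melbourne", 0.05),
    ],
}

def complete(prompt: str) -> str:
    """Return the prompt plus whichever continuation scores as most plausible."""
    options = continuations[prompt]
    best_token, _ = max(options, key=lambda pair: pair[1])
    return f"{prompt} {best_token}"

print(complete("The capital of Australia is"))  # -> "The capital of Australia is Sydney"
```

Swap in a real model and the mechanics are the same in spirit: rank continuations by likelihood, emit the winner, and hope plausibility happens to coincide with truth.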
