this post was submitted on 11 Feb 2025
523 points (98.7% liked)

Technology

[–] [email protected] 89 points 2 days ago (4 children)

As always, never rely on LLMs for anything factual. They're only good for things with a large tolerance for error, such as entertainment (e.g. RPGs).

[–] [email protected] 25 points 2 days ago (1 children)

I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn't need that thing included.

Sorry for being vague, I just didn't want to post my hometown on here.

[–] [email protected] 11 points 2 days ago

You can say Space Needle. We get it.

[–] [email protected] 13 points 2 days ago (1 children)

The issue for RPGs is that they have such "small" context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later

Although, similar to how DeepSeek uses two stages ("how would you solve this problem", then "solve this problem following this train of thought"), you could feed the model the recent conversation plus a private/unseen "notebook" that gets modified or appended to as events unfold. Doing that properly would need a whole new model, which likely wouldn't be profitable short term, though I imagine the same infrastructure could be reused for any LLM use case where fine details over a long period matter more than specific wording, including factual things.
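The notebook idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real product's design: `call_model` is a stub standing in for whatever LLM API you'd actually use, and the class names are made up. The point is the shape of the loop, where each turn first updates a private notes store, and then generates a reply from the notes plus only a short window of recent dialogue, instead of the whole transcript.

```python
def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call (hypothetical; no network needed here).
    return f"[model response to {len(prompt)} chars of prompt]"

class NotebookGM:
    """Toy game master with a private long-term notebook."""

    def __init__(self, window: int = 6):
        self.window = window           # how many recent turns stay in context
        self.turns: list[str] = []     # full transcript (grows unbounded)
        self.notebook: list[str] = []  # private notes, never shown to the player

    def respond(self, player_input: str) -> str:
        self.turns.append(f"Player: {player_input}")
        # Stage 1: ask the model which details are worth remembering later.
        note = call_model(
            "Extract any detail a GM should remember later:\n"
            + "\n".join(self.turns[-self.window:])
        )
        self.notebook.append(note)
        # Stage 2: answer from the notebook plus only the recent window,
        # so long-lived facts survive without busting the context window.
        reply = call_model(
            "Notes:\n" + "\n".join(self.notebook)
            + "\nRecent:\n" + "\n".join(self.turns[-self.window:])
        )
        self.turns.append(f"GM: {reply}")
        return reply
```

In practice both stages could share one model, and the notebook would need summarization of its own once it grows, but this is the basic separation the comment describes.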

[–] [email protected] 13 points 2 days ago

The problem is that the "train of thought" is also hallucinations. It might make the model better with more compute, but it's diminishing returns.

RPGs can use LLMs because they're not critical. If the LLM spews out nonsense you don't like, you just ask it to redo it, because it's all subjective.

[–] [email protected] 5 points 2 days ago

Or at least as an assistant in a field you're an expert in. I love using it for boilerplate at work (tech).

[–] [email protected] 2 points 2 days ago (1 children)

Nonsense, I use it a ton for science and engineering, it saves me SO much time!

[–] [email protected] 2 points 2 days ago (1 children)

Do you blindly trust the output or is it just a convenience and you can spot when there's something wrong? Because I really hope you don't rely on it.

[–] [email protected] 5 points 2 days ago (2 children)

How could I blindly trust anything in this context?

[–] [email protected] 7 points 1 day ago (1 children)

Y'know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.

Do you just blindly believe whatever it tells you?

It's not absolutely perfect, so it's useless.

It's all just garbage information!

This is terrible for jobs, society, and the environment!

[–] [email protected] 6 points 1 day ago

You know what... now that you say it, it really is just like the anti-Wikipedia stuff.

[–] [email protected] 0 points 1 day ago (1 children)

In which case you probably aren't saving time. Checking bullshit usually takes longer than just researching it yourself. Or it should, if you do due diligence.

[–] [email protected] 3 points 1 day ago (1 children)

It's nice that you can inform people that they can't tell whether something is saving them time, without knowing what their job is or how they're using the tool.

[–] [email protected] -2 points 1 day ago (1 children)

If they think AI is working for them, then they can think that. If you think AI is an effective tool for any profession, you're a clown. If my son's preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.

AI "works" because you're asking questions you don't know and it's just putting words together so they make sense without regard to accuracy. It's a hard limit of "AI" that we've hit. It won't get better in our lifetimes.

[–] [email protected] 1 points 9 hours ago (1 children)

Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.

[–] [email protected] 0 points 7 hours ago

Lame platitude