
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (13 children)

ah yes, my ability to read a PDF immediately confers upon me all the resources required to engage in materially equivalent experimentation on the thing that I just read! no matter whether the publisher spent cents or billions on the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it's ~totally~ free!

oh, wait, hang on. no. no it's the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying an LLM for a moment there!

[–] [email protected] -4 points 1 day ago* (last edited 1 day ago) (12 children)

You can experiment on your own GPU by running the tests with a variety of models from different generations (Llama 2-class 7B, Llama 3-class 8B, Gemma, Granite, Qwen, etc.).

Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test system, but the papers are usually helpful in describing their test methodology.
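
To make that concrete, here's a minimal sketch of such a harness using Hugging Face `transformers`; the checkpoint names are illustrative picks in the small-model range, not ones given in this thread:

```python
# Minimal sketch (assumes transformers + accelerate installed, and enough
# VRAM/RAM for ~2-3B-parameter models). Checkpoint names are illustrative
# examples, not from the thread.
from transformers import pipeline

MODELS = [
    "Qwen/Qwen2.5-3B-Instruct",
    "google/gemma-2-2b-it",
]

QUESTION = [{"role": "user", "content": "How many Rs are in 'strawberry'?"}]

for name in MODELS:
    pipe = pipeline("text-generation", model=name, device_map="auto")
    out = pipe(QUESTION, max_new_tokens=32)
    # With chat-style input, generated_text is the message list plus the reply.
    print(f"{name}: {out[0]['generated_text'][-1]['content']}")
```

A real reproduction would loop this over the paper's full prompt set and score the outputs, but the loading-and-prompting part is about this small.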

[–] [email protected] 6 points 23 hours ago (4 children)

👨🏿‍🦲: how many billions of models are you on

🗿: like, maybe 3, or 4 right now my dude

👨🏿‍🦲: you are like a little baby

👨🏿‍🦲: watch this

glue pizza

[–] [email protected] -3 points 23 hours ago* (last edited 23 hours ago) (1 children)

The most recent Qwen model supposedly works really well for cases like that, but I haven't tested this one myself; I'm going off what some dude on Reddit reported.

[–] [email protected] 6 points 23 hours ago (1 children)

Good for what? Glue pizza? Unnerving/creepy pasta?

[–] [email protected] -5 points 23 hours ago* (last edited 23 hours ago) (1 children)

Not making these famous logical errors

For example, how many Rs are in "strawberry"? Or shit like that

(Although that one is a bad example, because token-based models will fundamentally make such mistakes. There is, however, a new technique that lets LLMs process byte-level information, which fixes this.)
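
To illustrate the tokenization point: a minimal sketch using OpenAI's `tiktoken` (chosen here purely for illustration; each model family uses its own tokenizer, but any BPE tokenizer shows the same thing):

```python
# Minimal sketch (assumes tiktoken is installed). A token-based model
# reasons over token IDs, so counting letters inside a token is not
# directly observable to it.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")

print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # sub-word chunks, not letters
print("strawberry".count("r"))            # 3 -- trivial at the character level
```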

[–] [email protected] 5 points 23 hours ago

oh, I get it, you personally choose not to make these structurally-repeatable-by-foundation errors? you personally choose to be a Unique And Correct Snowflake?

wow shit damn, I sure want to read your eventual uni paper, see what kind of distinctly novel insight you've had to wrangle this domain!
