657
submitted 1 year ago by [email protected] to c/[email protected]
[-] [email protected] 4 points 1 year ago

Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We've been trained on the data too; neither a human nor an AI would be able to do well on the test without learning it first.

This is part of what makes AI so "scary": that it can basically know so much.

[-] [email protected] 23 points 1 year ago

Don't anthropomorphise. There is quite a difference between a human and an advanced lookup table.

[-] [email protected] 18 points 1 year ago

LLMs know nothing. Literally. They cannot.

[-] [email protected] 11 points 1 year ago

Because a machine that "forgets" stuff it reads seems rather useless... considering it was a multiple-choice exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost every time.

[-] [email protected] 0 points 1 year ago

ChatGPT had the book entirely memorized

I feel like this exposes a fundamental misunderstanding of how LLMs are trained.

[-] [email protected] 12 points 1 year ago

They're autocomplete machines. All they fundamentally do is match words together. If it was trained on the answers and still couldn't reproduce the correct word matches, it failed.
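The "match words together" idea above can be caricatured as a bigram lookup table: count which word follows which, then always emit the most frequent follower. This is only a toy sketch of the autocomplete framing, not how an actual LLM works (real models use learned neural representations, not literal lookup tables), and the training text here is made up for illustration:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": tally word -> next-word counts from a tiny made-up
# corpus, then greedily pick the most frequent follower at each step.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, steps=3):
    """Greedily extend `word` by its most common follower at each step."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # greedy continuation from the bigram counts
```

By the counts in this corpus, "the" is followed by "cat" twice, so the greedy completion starts "the cat ...". If the training text had contained the exam's question/answer pairs, this kind of table would reproduce them verbatim, which is roughly the expectation the comment above is voicing.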

this post was submitted on 15 May 2024
657 points (100.0% liked)