[-] [email protected] 30 points 6 months ago* (last edited 6 months ago)

To be honest, as someone who's very interested in computer generated text and poetry and the like, I find generic LLMs far less interesting than more traditional markov chains because they're too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. Probably a homegrown neural network would have better results.

[-] [email protected] 36 points 6 months ago

Ah man, if there's one thing autistic kids love, it's the sudden and arbitrary removal of an object they depend on!

[-] [email protected] 33 points 8 months ago

This is cool but will any of it explain the most pressing MrBeast question: why does he smile like that? I'm assuming it's because he's always thinking about how terrible a person he is.

[-] [email protected] 38 points 8 months ago

Having a conscience? There's no career in that!

[-] [email protected] 42 points 9 months ago

It's just a tool, like cars! My definition of tools is things that are being forced on us even though they're terrible for the environment and make everyone's life worse!

[-] [email protected] 38 points 1 year ago

Spam machines are only ever funny or interesting by accident. The more they smooth out the wrinkles the more creatively useless they become. The tension is sort of fascinating.

Like I've always been interested in generative poetry and other manglings of text, and ChatGPT's so fucking dull compared to putting a sentence through babelfish a few times.

[-] [email protected] 46 points 1 year ago

cool graph what's the x axis

[-] [email protected] 44 points 1 year ago

Malcolm and Simone Collins with their children – Octavian George, four, Torsten Savage, two, and Titan Invictus, one – at home in Pennsylvania.

bye

[-] [email protected] 125 points 1 year ago

What I find delightful about this is that I already wasn't impressed! Because, as the paper goes on to say

Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”

And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn't even get a particularly good score!

[-] [email protected] 171 points 1 year ago

From Re-evaluating GPT-4’s bar exam performance (linked in the article):

First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.

Ohhh, that is sneaky!

[-] [email protected] 53 points 1 year ago* (last edited 1 year ago)

I love the way these idiots keep incrementing the number on their ChatGPT fantasy as if it's a sufficient image of the future and it's going to get everyone on board. Complete failure of imagination, don't try to picture any actual use for it or anything, just make it... more.

[-] [email protected] 84 points 1 year ago

Oh well done, you added noise to a line going up!

