Ah man, if there's one thing autistic kids love, it's the sudden and arbitrary removal of an object they depend on!
This is cool but will any of it explain the most pressing MrBeast question: why does he smile like that? I'm assuming it's because he's always thinking about how terrible a person he is.
Having a conscience? There's no career in that!
It's just a tool, like cars! My definition of tools is things that are being forced on us even though they're terrible for the environment and make everyone's life worse!
Spam machines are only ever funny or interesting by accident. The more they smooth out the wrinkles the more creatively useless they become. The tension is sort of fascinating.
Like I've always been interested in generative poetry and other manglings of text, and ChatGPT's so fucking dull compared to putting a sentence through babelfish a few times.
cool graph what's the x axis
Malcolm and Simone Collins with their children – Octavian George, four, Torsten Savage, two, and Titan Invictus, one – at home in Pennsylvania.
bye
What I find delightful about this is that I already wasn't impressed! Because, as the paper goes on to say
Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”
And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn't even get a particularly good score!
From Re-evaluating GPT-4’s bar exam performance (linked in the article):
First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.
Ohhh, that is sneaky!
I love the way these idiots keep incrementing the number on their ChatGPT fantasy as if it's a sufficient image of the future and it's going to get everyone on board. Complete failure of imagination, don't try to picture any actual use for it or anything, just make it... more.
Oh well done, you added noise to a line going up!
To be honest, as someone who's very interested in computer-generated text and poetry and the like, I find generic LLMs far less interesting than more traditional Markov chains, because they're too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. Probably a homegrown neural network would have better results.
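For anyone curious, the kind of word-level Markov chain the comment has in mind can be sketched in a few lines of Python. This is a minimal illustration, not any particular generator; the corpus, order, and function names are all made up for the example:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=12, seed=0):
    """Random-walk the chain, starting from a random key, for up to `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the cat sat on the mat and the dog sat on the log "
          "and the cat saw the dog on the mat")
print(generate(build_chain(corpus)))
```

The whimsy comes precisely from how dumb this is: the walk only knows local word statistics, so it happily splices sentences together at shared words, which is where the surprising juxtapositions come from.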