AMUSING, INTERESTING, OUTRAGEOUS, or PROFOUND
This is a page for anything that's amusing, interesting, outrageous, or profound.
♦ ♦ ♦
RULES
❶ Each player gets six cards, except the player on the dealer's right, who gets seven.
❷ Posts, comments, and participants must be amusing, interesting, outrageous, or profound.
❸ This page uses Reverse Lemmy-Points™, or 'bad karma'. Please downvote all posts and comments.
❹ Posts, comments, and participants that are not amusing, interesting, outrageous, or profound will be removed.
❺ This is a non-smoking page. If you must smoke, please click away and come back later.
❻ Don't be a dick.
Please also abide by the instance rules.
♦ ♦ ♦
Can't get enough? Visit my blog.
♦ ♦ ♦
Please consider donating to Lemmy and Lemmy.World.
$5 a month is all they ask — an absurdly low price for a Lemmyverse of news, education, entertainment, and silly memes.
Also, let me point out they didn't properly grade the bar exam: https://www.livescience.com/technology/artificial-intelligence/gpt-4-didnt-ace-the-bar-exam-after-all-mit-research-suggests-it-barely-passed
It did excellently on the multiple-choice section, but so would literally any law student using Google.
And that's not the only lie. It can't even repeat stuff we already know. I occasionally give a model one of my own, by now decades-old, papers without the abstract and conclusions and ask what it can conclude. It gets it completely wrong. Like not-even-funny wrong: wrong conclusions, wrong theory, wrong methodology.
It's pretty fun to watch AI boosters get upset at that and blame my paper for the LLM saying literally the opposite of what it actually says.