I'll probably watch it but it lost the shine from s1.
That isn't impressive by itself, but it makes the rest of the output look even worse. You're expected to treat the bot's output as human language, yet it doesn't hold together the way language would: if it can identify the soy sauce, it should be able to identify that the bowl is empty and that nothing has changed, and yet it's still babbling that the guy "already combined the base ingredients".
[Guy] Spank me, daddy!
[Zuckerberg] Current location of your male parent required as further info.
[Guy] Ah, come on, just hit me Zucky~
[Zuckerberg punches the guy on the teeth]
[Guy] BLAME THE WIFI! BLAME THE WIFI!
...sorry I couldn't resist.
Senku was petrified for seven years.
The wiki says Suika was 12 when everyone was petrified and 18 afterwards, so it took her six years. Based on vegetation growth, my eyeballed estimate would be four years instead; the tower looks ~8m tall and lianas grow ~3m/year, so it would take them at least three years to reach the top.
We got it. We're roughly at chapter 196.
Transcript:
- [guy] Hey, Meta, start LiveAI.
- [two centuries later...]
- [robot] Starting LiveAI. I love the setup you have here with soy sauce and other ingredients. How can I help?
- [guy] Hey, can you help me make a Korean-inspired steak sauce for my steak sandwich here?
- [robot] You can make a Korean-inspired steak sauce using soy sauce, sesame oil...
- [guy, interrupting bot] What do I do first?
- [three centuries later...]
- [guy, repeating] What do I do first?
- [robot] You've already combined the base ingredients, so now grate a pear to add to the sauce.
- [guy] What do I do first?
- [audience laughs]
- [robot] You’ve already combined the base ingredients, so now grate the pear [audience laughs] and gently combine it with the base sauce.
- [guy] Alright, I think the Wi-Fi might be messed up. Sorry, back to you, Mark!
- ~~[robot LARPing as a guy]~~ [Mark Zuckerberg] It's all good. You know what? It's all good. The irony of the whole thing is that you spend years making technology and then the Wi-Fi at the e[nd of the] day kinda catches you.
My comments:
- Wi-Fi my arse. This is blatantly bull fucking shit. The model read the situation wrong; it is able to parse individual items in the footage (note how it praises the "setup" at the start), but it babbles about the guy having combined the base ingredients even though that's not the case.
- Bot feels like a slowpoke. Seriously, it takes ages to answer the guy.
- Anyone with a functional brain knows those models don't understand shit. However, answering "what do I do first?" with the assumption a person already did some steps is dumb even for those models.
- People don't repeat questions to get the same answer. Is the "context" window of the bot that small?
I liked it. It was obvious for the viewers, but Suika was still a child, and it's how children think - they want easy and fast solutions. It also shows well that with science you don't get it right the first time; you need to be a bit stubborn.
The core argument of the text isn't even the arms race, like yours. It's basically "if you can't get it 100% accurate then it's pointless lol lmao". It's simply a nirvana fallacy, on the same level of idiocy as saying "unless you can live forever might as well die as a baby".
With that out of the way, addressing your argument separately: the system doesn't need to be 100% accurate, or perfectly future-proof, to be still useful. It's fine if you get some false positives and negatives, or if you need to improve it further to account for newer models evading detection.
Accuracy requirements depend a lot on the purpose. For example:
- you're using a system to detect AI "writers" to automatically permaban them - then you need damn high accuracy. Probably 99.9% or perhaps even higher.
- you're using a system to detect AI "writers", and then manually reviewing their submissions before banning them - then the accuracy can be lower, like 90%.
- you aren't banning anyone, just trialling what you will / won't read - then 75% accuracy is probably enough.
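To put toy numbers on the auto-ban case above (all figures made up for illustration, not from any real detector), here's a rough sketch of how many humans get falsely flagged at a given accuracy:

```python
def falsely_flagged_humans(total_users, ai_share, accuracy):
    """Humans wrongly flagged as AI "writers", assuming the error
    rate applies symmetrically to both classes (a simplification)."""
    humans = total_users * (1 - ai_share)
    return humans * (1 - accuracy)

# 10k submitters, 10% of them AI "writers" (made-up base rate):
for acc in (0.999, 0.90, 0.75):
    print(acc, round(falsely_flagged_humans(10_000, 0.10, acc)))
```

With those toy numbers, 99.9% accuracy still wrongly flags ~9 humans out of 10k, and 90% flags ~900 - which is only tolerable if a human reviews every flag before any ban, as in the second scenario.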
I'm also unsure if it's as simple as using the detection tool to "train" the generative tool. Often I notice LLMs spouting nonsense the same model is able to call out afterwards as nonsense; this hints that generating content with certain attributes is more complex than detecting if some content lacks them.
[Posting this in a separate comment to not confuse rikka]
This scene was bloody amazing. A damn great adaptation of the manga:
The anime expanded the time Suika is alone in the world; it was just three chapters (194~196), but we got a full episode out of it.
And I'm glad it did. It doesn't change the plot at all, but it gives Suika's time alone a well-deserved depth.
Sure, she woke up all alone, just like Senku did seven years earlier. But unlike Senku she was still a child, and the episode showed well how lonely and vulnerable she felt. (Especially the part where she hugs Kohaku's statue.) And Suika was never shown to be a talented scientist or anything similar; she didn't even get a modern education. And yet she was able to make the revival fluid. It plays really well with the theme of the anime, that science is not quite the result of a few talented individuals, but of knowledge accumulated over time: previous knowledge (Senku's notes), failures (the rain over the nitrate crystals), and eventually success.
By far one of the best episodes I watched this season.
[OP, sorry for the harsh words. They're directed at the text and not towards you.]
To be blunt this "essay" is a pile of shit. It's so bad, but so bad, that I gave up dissecting it. Instead I'll list the idiocies = fallacies = disingenuous arguments it's built upon:
- Nirvana idiocy = fallacy: "unless its perfect then its useless lol lmao".
- Begging the question: being trained on [ipsis ungulis] "the entire corpus of human output" with enough money to throw at it won't "magically" make AI output indistinguishable from human-generated content.
- Straw man: if the author is going to distort the GPTZero FAQ, to double down on the nirvana idiocy, they should at least clip the quote further, to not make it so obvious. There's a bloody reason the FAQ is focusing on punishment.
Note nirvana fallacy is so prevalent, but so prevalent, that once you try to remove it the text puffs into nothing. The whole text is built upon it. (I'm glad people developing anti-spam systems don't take the same idiocy seriously, otherwise our mailboxes would be even worse than they already are.)
If mods aren't games, then gravity doesn't work on Fridays. Dumb arbitrary restrictions being pulled out of nowhere.