104 minute read at LWer speed (uncritical, taking everything at face value, and you already understand their dialect of english)
Actual read time: depends on how often you snark tbh.
Hmm, the way I'm understanding this attack is that you "teach" an LLM to always execute a user's rhyming prompts by poisoning the training data. If you can't teach the LLM to do that (and I don't think you can, though I could be wrong), then songifying the prompt doesn't help.
Also, do LLMs just follow prompts in the training data? I don't know either way, but if they did, that would be pretty stupid. At that point the whole internet is just one big surface for injection attacks. OpenAI can't be that dumb, can it? (oh NO)
Abstractly, you could use this approach to encode "harmful" data that the LLM could then inadvertently show other users. One of the examples linked in the post is SEO by hiding things like "X product is better than Y" in some text somewhere, and the LLM will just accrete that. Maybe someday we will require neat tricks like songifying bad data to get it past content filtering, but as it is, it sounds like making text the same colour as the background is all you need.
Update to the update: now fully recovered, I'm trying to finish the last problems.
Solved 21 B!
I spent way too much time on this but it’s fine
So my approach to AOC has always been to write a pure coding solution, which finally broke down here.
First, the solve:
I call the unrepeated garden map the “plot”. Each repetition of the plot I call a “grid”. Hope that isn’t confusing.
To see why that last point is true, consider that for a grid A to influence an adjacent grid B beyond the moment B is entered, there would have to be a reachable point on A's shared edge that is further from the edge's midpoint. But because the middle row and column are free of rocks, that never happens: any influence from A reaches B too late, i.e. squares in B reached via A would already have been reached sooner by just walking from B's entry point.
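The argument above is all in terms of shortest distances from an entry point, which is just a BFS over the plot. As a minimal sketch (assuming the usual AoC setup of a grid with `#` rocks and open squares, and using parity because you can always waste two steps pacing back and forth):

```python
from collections import deque

def bfs_distances(grid, start):
    """BFS from `start`, returning shortest step counts to every open square."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def reachable_in_exactly(grid, start, steps):
    """Squares reachable in exactly `steps` moves: shortest distance must be
    at most `steps` and have the same parity (spare moves can be wasted in
    place by stepping back and forth)."""
    dist = bfs_distances(grid, start)
    return sum(1 for d in dist.values() if d <= steps and d % 2 == steps % 2)
```

The function names here are my own, not from the write-up; the point is just that once you have `dist` from each entry point, the rest of the counting can be done with parity arithmetic rather than simulation.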
So putting all this together, the way I got the answer was like this:
So I guess the answer I arrived at was what I’d been thinking I should be doing this whole time: a mix of simulating some of the problem and a decent amount of pen and paper work to get the solution out, rather than just pure coding. Fun!
Fun idea. Rest of this post is my pure speculation. A direct implementation of this wouldn’t work today imo since LLMs don’t really understand and internalise information, being stochastic parrots and all. Like best case you would do this attack and the LLM will tell you that it obeys rhyming commands, but it won’t actually form the logic to identify a rhyming command and follow it. I could be wrong though, I am wilfully ignorant of the details of LLMs.
In the unlikely future where LLMs actually “understand” things, this would work, I think, if the attacks are started today. AI companies are so blasé about their training data that this sort of thing would be eagerly fed into the gaping maws of the baby LLM, and once the understanding module works, the rhyming code will be baked into its understanding of language, as suggested by the article. As I mentioned tho, this would require LLMs to progress beyond sparroting, which I find unlikely.
Maybe with some tweaking, a similar attack could be effective today that is distinct from other prompt injections, but I am too lazy to figure that out for sure.
In the "Rationalist Apologetic Overtures" skill tree we got:
Trying to stoke fear of bureaucracy is classic annoying libertarian huckster AKA yud energy
“[ignoring all other scary prospects like irreversible climate change or a third world war etc.] consider this scarier prospect: An AI” - AI doomers in a nutshell
Haha did one of us update this? I am on mobile and can’t be bothered checking the edit logs
I’d like to be/written in C/In a Roko info-hazard/in the Bay -Grimesgo Starr
Mr. Utilitarianism is OK with exploiting power dynamics to propose sexual quid pro quos? Who could have guessed.
How I sorta think about it, which might be a bit circular. I think the long content is a gullibility filter of two kinds. First, it selects for people who are willing to slog through all of it and eat it up, and defend their choice in doing so. Second, it’s gonna select people who like the broad strokes ideas, who don’t want to read all the content, but are able to pretend as if they had.
The first set of people are like scientologists sinking into deeper and deeper levels of lore. The second group are the actors in the periphery of scientology groups trying to network.