[-] [email protected] 7 points 1 month ago* (last edited 1 month ago)

A real modest ~~brunch~~ bunch

[-] [email protected] 7 points 2 months ago

NASA programmers grow more powerful by the day. It’s only a matter of time before they reach AGI

[-] [email protected] 7 points 3 months ago

> Is Japanese really that strict

My Japanese uncle who works at Nintendo says yes. If you write わ instead of は, they make you commit 切腹 (seppuku) in front of all your friends

[-] [email protected] 7 points 5 months ago

Yeah, weird thing to eliminate from a city, or weird thing to see without context. Basically: "Wrongers try and envision a better world without deleting the parts of human experience that make it meaningful or worthwhile" challenge (impossible)

[-] [email protected] 7 points 1 year ago

that's a big oof for me dawg

[-] [email protected] 7 points 1 year ago

Haha I had that drafted and decided that, in the spirit of the post, I’d write something less good

[-] [email protected] 7 points 1 year ago

> has been happening

Yes absolutely, just now it’s A C C E L E R A T I N G

[-] [email protected] 7 points 1 year ago

Funny, when I ask ChatGPT to draw a rationalist, the same thing pops up.

[-] [email protected] 7 points 1 year ago

The wiki article in the OP looks exhausting to read. Is it just the ontological argument, but for juicy computing? As in, computing juicy enough for brain simulation or AGI or whatever.

Not willing to look into the abyss tonight, basically.

[-] [email protected] 7 points 2 years ago

Fun idea. Rest of this post is my pure speculation. A direct implementation of this wouldn’t work today imo, since LLMs don’t really understand and internalise information, being stochastic parrots and all. Like, best case, you do this attack and the LLM will tell you that it obeys rhyming commands, but it won’t actually form the logic to identify a rhyming command and follow it. I could be wrong though; I am wilfully ignorant of the details of LLMs.

In the unlikely future where LLMs actually “understand” things, this would work, I think, if the attacks are started today. AI companies are so blasé about their training data that this sort of thing would be eagerly fed into the gaping maws of the baby LLM, and once the understanding module works, the rhyming code will be baked into its understanding of language, as suggested by the article. As I mentioned tho, this would require LLMs to progress beyond sparroting, which I find unlikely.

Maybe with some tweaking, a similar attack could be effective today that is distinct from other prompt injections, but I am too lazy to figure that out for sure.

[-] [email protected] 7 points 2 years ago

I’d like to be/written in C/In a Roko info-hazard/in the Bay -Grimesgo Starr

