[-] scruiser@awful.systems 18 points 2 months ago

You know, this makes the exact wording Eliezer chose in this post: https://awful.systems/post/6297291 much more suspicious. "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?

[-] scruiser@awful.systems 17 points 3 months ago* (last edited 3 months ago)

TracingWoodgrains's hit piece on David Gerard (the 2024 one, not the more recent enemies list one, where David Gerard got rated above the Zizians as lesswrong's enemy) is in the top 15 for lesswrong articles from 2024, currently rated at #5! https://www.lesswrong.com/posts/PsQJxHDjHKFcFrPLD/deeper-reviews-for-the-top-15-of-the-2024-review

It's nice to see that with all the lesswrong content about AI safety and alignment and saving the world and human rationality and fanfiction, an article explaining how terrible David Gerard is (for... checks notes, demanding proper valid sources about lesswrong and adjacent topics on wikipedia) got voted above them all! Let's keep up our support for dgerard!

[-] scruiser@awful.systems 18 points 4 months ago

Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

A very long, detailed post, elaborating very extensively on the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has lied to and broken "AI safety commitments" with rationalists/lesswrongers/EAs shamelessly and repeatedly:

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.

[-] scruiser@awful.systems 17 points 5 months ago

Continuation of the lesswrong drama I posted about recently:

https://www.lesswrong.com/posts/HbkNAyAoa4gCnuzwa/wei-dai-s-shortform?commentId=nMaWdu727wh8ukGms

Did you know that post authors can moderate their own comments section? Someone disagreeing with you too much but getting upvoted? You can ban them from responding to your posts (but not block them entirely???)! And, the cherry on top of this questionable moderation "feature", guess why it was implemented? Eliezer Yudkowsky was mad about highly upvoted comments responding to his post that he felt didn't get him or didn't deserve that upvoting, so instead of asking moderators to block on a case-by-case basis (or, acausal God forbid, considering whether the communication problem was on his end), he asked for a modification to the lesswrong forums to enable authors to ban people (and delete the offending replies!!!) from their posts! It's such a bizarre forum moderation choice, but I guess habryka knew who the real leader is and had it implemented.

Eliezer himself is called to weigh in:

It's indeed the case that I haven't been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it'd had Twitter's options for turning off commenting entirely.

So yes, I suppose that people could go ahead and make this decision without me. I haven't been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.

Uh, considering his recent twitter post... this sure is something. Also: "it does not feel like the system is set up to make that seem like a sympathetic decision to the audience" no shit, Sherlock, deleting a highly upvoted reply because it feels like too much effort to respond to is in fact going to make people unsympathetic (at the least).

[-] scruiser@awful.systems 20 points 5 months ago* (last edited 5 months ago)

“I’m sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was,” he said. (Translation: It’s complicated.)

Why do these people have the urge to talk like this? Does it make themselves feel smarter? Do they think it makes them look smart to other people? Are they so caught up in their field they can't code switch to normal person talk?

[-] scruiser@awful.systems 19 points 10 months ago

We barely understand how LLMs actually work

I would be careful how you say this. Eliezer likes to go on about giant inscrutable matrices to fearmonger, and the promptfarmers use the (supposed) mysteriousness as another avenue for crithype.

It's true that reverse engineering any specific output or task takes a lot of effort, requires access to the model's internal weights, and hasn't been done for most tasks, but the techniques for doing so exist. And in general there is a good high-level conceptual understanding of what makes LLMs work.

which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

This part is absolutely true. If you catch them in a mistake, most of their data about how to respond comes from how humans respond (or, at best, from fine-tuning on other LLM output), and they don't have any way of checking their own internals, so the words they say in response to mistakes are just more bs unrelated to anything actually going on inside them.

[-] scruiser@awful.systems 20 points 10 months ago
  • "tickled pink" is a saying for finding something humorous

  • "BI" is business insider, the newspaper that has the linked article

  • "chuds" is a term of online alt-right losers

  • OFC: of fucking course

  • "more dosh" mean more money

  • "AI safety and alignment" is the standard thing we sneer at here: making sure the coming future acasual robot god is a benevolent god. Occasionally reporter misunderstand it to mean or more PR-savvy promptfarmers misrepresent it to mean stuff like stopping LLMs from saying racist shit or giving you recipes that would accidentally poison you but this isn't it's central meaning. (To give the AI safety and alignment cultists way too much charity, making LLMs not say racist shit or give harmful instructions has been something of a spin-off application of their plans and ideas to "align" AGI.)

[-] scruiser@awful.systems 20 points 11 months ago

It was a thing in the sense that the promptfondlers were trying to portray prompting as a matter of fine technique and skill (as opposed to dumb luck mixed with trial and error with a few general guidelines that half work). It was not a thing then and is still not now in the sense that prompting has none of the skill or precision or verifiability or reliability of actual programming.

[-] scruiser@awful.systems 18 points 1 year ago

Keep in mind the author isn't just (or even primarily) counting the ultra wealthy and establishment politicians as "elites"; they are also including scientists trying to educate the public on their area of expertise (i.e. COVID, global warming, environmentalism, etc.), and sociologists/psychologists explaining problems the author wants to ignore or is outright in favor of (racism/transphobia/homophobia).

[-] scruiser@awful.systems 18 points 1 year ago* (last edited 1 year ago)

This isn't debate club or men of science hour, this is a forum for making fun of idiocy around technology. If you don't like that you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs for example) many many times already, so most of us weren't going to waste time to track down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[-] scruiser@awful.systems 19 points 1 year ago

Lol, Altman's AI-generated purple prose slop was so bad even Eliezer called it out (as opposed to making a doomer-hype point):

Perhaps you have found some merit in that obvious slop, but I didn't; there was entropy, cliche, and meaninglessness poured all over everything like shit over ice cream, and if there were cherries underneath I couldn't taste it for the slop.

[-] scruiser@awful.systems 17 points 2 years ago

First of all. You could make facts a token value in an LLM if you had some pre-calculated truth value for your data set.

An extra bit of labeling on your training data set really doesn't help you that much. LLMs already make up plausible looking citations and website links (and other data types) that are actually complete garbage even though their training data has valid citations and website links (and other data types). Labeling things as "fact" and forcing the LLM to output stuff with that "fact" label will get you output that looks (in terms of statistical structure) like valid labeled "facts" but have absolutely no guarantee of being true.
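To make that concrete, here's a toy sketch (entirely hypothetical, not how any real training pipeline works): even if every training example is a true statement carrying a "FACT" label, a generator that learns per-position token statistics rather than truth can freely recombine those tokens into labeled falsehoods.

```python
import itertools

# Toy "training set": every example is true and carries a FACT label.
training = [
    ("FACT", "Paris", "is the capital of", "France"),
    ("FACT", "Tokyo", "is the capital of", "Japan"),
    ("FACT", "Ottawa", "is the capital of", "Canada"),
]

# A statistical generator picks each token because it looked plausible in
# FACT-labeled contexts -- it has no notion of which combinations are true.
cities = {t[1] for t in training}
countries = {t[3] for t in training}
generated = [("FACT", city, "is the capital of", country)
             for city, country in itertools.product(cities, countries)]

# Most outputs are structurally valid, FACT-labeled, and false,
# e.g. ("FACT", "Paris", "is the capital of", "Japan").
novel = [g for g in generated if g not in training]
print(len(novel))  # 6 of the 9 combinations are labeled "facts" that are false
```

The label travels with the statistical structure, not with the truth value, which is exactly why labeled-fact output would look like facts without being guaranteed to be any.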

