[-] [email protected] 17 points 2 months ago

If LLM hallucinations ever become a non-issue, I doubt I'll be needing to read a deeply nested, buzzword-laden Lemmy post to first hear about it.

[-] [email protected] 17 points 2 months ago

Ask chatgpt to explain it to you.

[-] [email protected] 17 points 3 months ago

sarcophagi would be the opposite of vegetarians

Unrelated but slightly amusing fact: sarcophagos is still the word for carnivorous in Greek. The amusing part is that the word for vegetarian is chortophagos, which is weirdly close to being a slur since it literally means grass eater.

I am easily amused.

[-] [email protected] 17 points 4 months ago

https://xcancel.com/aadillpickle/status/1900013237032411316

transcription:

tweet text:

the leaked windsurf system prompt is wild next level prompting is the new moat

windsurf prompt text:

You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.

[-] [email protected] 18 points 5 months ago* (last edited 5 months ago)

Saltman has a new blogpost out he calls 'Three Observations' that I feel too tired to sneer at properly, but I'm sure it will be featured in pivot-to-ai pretty soon.

Of note, he seems to admit chatbot abilities have plateaued for the current technological paradigm, by way of offering the "observation" that model intelligence is logarithmically dependent on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on.

Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

Third observation is that

The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

which is hilarious.

The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn't read too closely.

[-] [email protected] 17 points 5 months ago

Penny Arcade weighs in on deepseek distilling chatgpt (or whatever the deal actually is):

[-] [email protected] 17 points 8 months ago

"Hopefully the established capitalists will protect us from the fascists' worst excesses" hasn't been much of a winning bet historically.

[-] [email protected] 17 points 9 months ago

It had dumb scientists, a weird love-conquers-all theme, a bathetic climax that was also on the wrong side of believable, and an extremely tacked-on epilogue.

Wouldn't say that I hated it, but it was pretty flawed for what it was. Magnificent black hole CGI notwithstanding.

[-] [email protected] 18 points 10 months ago* (last edited 10 months ago)

Stephanie Sterling of the Jimquisition outlines the thinking involved here. Well, she swears at everyone involved for twenty minutes. So, Steph.

She seems to think the AI generates .WAD files.

I guess they fell victim to one of the classic blunders: never assume that "it can't be that stupid, someone must be explaining it wrong."

[-] [email protected] 17 points 1 year ago* (last edited 1 year ago)

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.

Sounds like Oxford increasingly did not want anything to do with them.

edit: Here's a 94 page "final report" that seems more geared towards a rationalist audience.

Wonder what this was about:

Why we failed [...] There also needs to be an understanding of how to communicate across organizational communities. When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.

[-] [email protected] 18 points 2 years ago* (last edited 2 years ago)

Hi, my name is Scott Alexander and here's why it's bad rationalism to think that widespread EA wrongdoing should reflect poorly on EA.

The assertion that having semi-frequent sexual harassment incidents go public is actually an indication of a movement's health, since it's evidence that there's no systemic coverup going on (and besides, everyone's doing it), is, uh, quite something.

But surely of 1,000 sexual harassment incidents, the movement will fumble at least one of them (and often the fact that you hear about it at all means the movement is fumbling it less than other movements that would keep it quiet). You’re not going to convince me I should update much on one (or two, or maybe even three) harassment incidents, especially when it’s so easy to choose which communities’ dirty laundry to signal boost when every community has a thousand harassers in it.

[-] [email protected] 17 points 2 years ago* (last edited 2 years ago)

'We are the sole custodians of this godlike technology that we can barely control but that we will let you access for a fee' has been a mainstay of OpenAI marketing for as long as Altman has been CEO; it's really no surprise this was 'leaked' as soon as he was back in charge.

It works, too! Anthropic just announced they are giving chat access to a 200k token context model (chatgpt4 is <10k, I think) where they supposedly cut the rate of hallucinations in half, and it barely made headlines.


Architeuthis

joined 2 years ago