MicroSlop's new Xbox CEO has a background in AI and is worried about birthrates.
Can't wait for her LessWrong handle to leak.
Using talking points meant for C-suites on a general audience and outing yourself as a complete psychopath: the San Fran CEO Story.
In the post he keeps referring to Ollama as an LLM (it's a desktop app that runs a local server, letting you download local LLMs and interface with them via CLI or HTTP API), so it's possible he's just that far behind in his technical understanding of LLMs that he's resorted to taking the wrong people's word for it.
The post certainly reads like he doesn't even know which local LLM he's using, let alone what it takes to make one.
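For reference, the division of labor really is that clean: Ollama is the server program, the LLM is a separate artifact it downloads and serves. A minimal sketch of what talking to its local HTTP API looks like (the model name is illustrative, and it assumes you actually have Ollama running on the default port):

```python
import json
import urllib.request

# Ollama (the app) listens on localhost:11434; the model ("llama3" here,
# purely as an example) is a separate download that Ollama manages.
payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,  # ask for one JSON response instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # only works with an Ollama server running
```

Confusing that setup with "an LLM" is roughly confusing a media player with a movie.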
That he went from that all the way to "it's mostly OK when Sam Altman steals all your data, misrepresents it, and then steals all your traffic" is... bad.
At any rate it's definitely good to know that that war crime forensics data project isn't quite the unintentional shambles Cory makes it out to be.
That was a good read.
It's not "unethical" to scrape the web in order to create and analyze data-sets. That's just "a search engine"
Conflating what LLMs do, and what goes into LLM web scraping, with "a search engine" is messed up. The scraping article he links is mostly about how badly copyright works, and how analyzing trade-secret-walled data can benefit both consumers and science while occasionally being bad for citizen privacy. You'll recognize that as mostly irrelevant to the concerns people actually have: LLM training-data providers DDoSing the fuck out of everything, and all the rest of the stuff tante does a good job of explaining.
Cory also provides this anecdote:
As a group of human-rights defending forensic statisticians, HRDAG has always relied on cutting edge mathematics in its analysis. With its Colombia project, HRDAG used a large language model to assign probabilities for responsibility for each killing documented in the databases it analyzed.
That is, HRDAG was able to rigorously and legibly say, “This killing has an X% probability of having been carried out by a right-wing militia, a Y% probability of having been carried out by the FARC, and a Z% probability of being unrelated to the civil war.”
The use of large language models — produced from vast corpuses of scraped data — to produce accurate, thorough and comprehensible accounts of the hidden crimes that accompany war and conflict is still in its infancy. But already, these techniques are changing the way we hold criminals to account and bring justice to their victims.
Scraping to make large language models is good, actually.
what the actual shit
edit: I mean, he tried transformer-powered voice-to-text and liked it, and now he's all in on the "LLMs are a rigorous and accurate tool, actually" bandwagon?
Also the web scraping article is from 2023 but CD linked it in the recent pluralistic post so I assume his views haven't changed.
Timnit briefly weighs in about being included in the doc, apparently she regrets it and says the filmmakers "sprinkle some [AI skeptics] in like chocolate chips to perform ethics".
She also calls Yud a eugenicist cult leader with nothing to show for it.

"not on squeaking terms"

by the way I first saw this in the stubsuck
transcript
I know this is about rationalism but the unexpanded uncapitalized "rat" name really makes this post. Imagining a world where this is a callout post about a community of rodents being racist. We're not on squeaking terms right now cause they're being problematic :/
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.
In every RAG guide I've seen, the suggested system prompts have always tended to include some more dignified variation of "Please, for the love of god, only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
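For the uninitiated: the pattern those guides describe really is just string concatenation plus begging. A minimal sketch of the usual template (the function name and exact wording are mine, not lifted from any particular guide):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Stuff retrieved passages into a prompt and plead for groundedness.

    This constrains hallucination; nothing about it eliminates it, since
    the model is still free-associating over the whole context window.
    """
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the retrieved passages below. "
        "If the answer is not in them, say you don't know.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}"
    )
```

That string then goes to the same stochastic text generator as always, which is why "you're just prompting it wrong" doesn't survive contact with reality.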
This was such a chore to read; it's basically quirk-washing TREACLES. It's like a major publication deciding to take an uncritical look at Scientology, focusing on the positive vibes and the camaraderie while smack in the middle of Operation Snow White, which I bet in fact happened a lot at the time.
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years
Fuck off.
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions, all while fortifying a bubble of accepted truths that are strangely amenable to letting rich people do whatever the hell they want.
Writing a 7,000-8,000 word insider exposé on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.
I mean, sure, but it's still the CEO of XBOX on her second day on the job throwing her hat in the legendarily sus declining birthrates discourse in service of AI solutionism, it's not nothing.