scruiser

joined 2 years ago
[–] [email protected] 5 points 3 hours ago

My understanding is that it is possible to reliably (to the standard required for lab animals) insert genes for individual proteins. I.e., if you want a transgenic mouse line whose neurons fluoresce under laser light when they fire, you can insert a gene sequence for GCaMP without too much hassle. You can even put the inserted gene under the control of certain promoters so that it only activates in certain types of neurons and not others. Some really ambitious work has inserted sequences for multiple colors of optogenetic indicators into a single mouse line.

If you want something more complicated that isn't just a sequence for a single protein (or at most a few proteins), never mind something as conceptually nebulous as "intelligence", then yeah, the technology, and even the basic scientific understanding, is lacking.

Also, the gene-insertion techniques that are reliable enough for experimenting on mice and rats aren't nearly reliable enough to use on humans (not that anyone even knows what genes to insert in the first place for anything but the most straightforward genetic disorders).

[–] [email protected] 10 points 23 hours ago

One comment refuses to leave me: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=C7MvCZHbFmeLdxyAk

The commenter makes an extended, tortured analogy to machine learning... in order to say that maybe genes correlated with IQ won't add to IQ linearly. It's an encapsulation of many lesswrong issues: veneration of machine learning, overgeneralization of comp sci into unrelated fields, a need to use paragraphs to say what a single sentence could, and a failure to actually state firm, direct objections to blatantly stupid ideas.

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (1 children)

My favorite comment in the lesswrong discussion: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=oyDCbGtkvXtqMnNbK

It's not that eugenics is a magnet for white supremacists, or that rich people might give their children an even more artificially inflated sense of self-worth. No, the risk is that the superbabies might be Khan and kick-start the Eugenics Wars. Of course, this isn't a reason not to make superbabies; it just means the idea needs some more workshopping via Red Teaming (hacker lingo is applicable to everything).

[–] [email protected] 10 points 1 day ago* (last edited 1 day ago)

Soyweiser has likely accurately identified that you're JAQing in bad faith, but on the off chance you actually want to educate yourself, the RationalWiki page on biological determinism and eugenics is a decent place to start for the standard flaws and fallacies used to argue for pro-eugenics positions. RationalWiki has a scathing and sarcastic tone, but that tone is well deserved in this case.

To provide a brief summary: in general, the pro-eugenicists confuse correlation with causation, get the direction of causation wrong, overestimate what little correlation there actually is, fail to account for environmental factors (especially systemic inequalities that might require leftist solutions to have any chance of being fixed), and refuse to acknowledge the context of genetics research (i.e., all the neo-Nazis and alt-righters who will jump on anything they can get).

The lesswrongers and SSCers sometimes whine that they don't get fair consideration, but considering they take Charles Murray the slightest bit seriously, they can keep whining.

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago) (9 children)

That was literally the inflection point on my path to sneerclub. I had started to break from lesswrong before, but I hadn't reached the tipping point of calling it all bs. And for SSC, and Scott in particular, I had managed to overlook the real message buried in thousands of words of equivocating, bad analogies, and bad research in his earlier posts. But "You Are Still Crying Wolf" finally made me question what Scott's real intent was.

[–] [email protected] 2 points 1 week ago

I normally think gatekeeping fandoms and calling people fake fans is bad, but in this case it is necessary and deserved to assume Elon Musk is only a surface-level fan, grabbing names and icons without understanding them.

[–] [email protected] 3 points 1 week ago

This is a good summary of half of the motive to ignore the real AI safety issues in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn't thought about the extent to which Altman's plan is "hey morons, hook my shit up to fucking everything and try to stumble across a use case that's good for something" (as opposed to the "we're building a genie, and when we're done we're going to ask it for three wishes" he hypes up); that makes more sense as a long-term plan...

[–] [email protected] 4 points 6 months ago

It's not all the exact same! ~~Friendship is Optimal adds in pony sex~~

[–] [email protected] 4 points 6 months ago

There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

/r/rational isn't just for AI fiction; it also ~~claims~~ includes anything with decent verisimilitude, so stuff like Hatchet and The Martian also show up in its recommendation lists! ~~letting it claim credit for better fiction than the AI stuff~~

[–] [email protected] 6 points 6 months ago* (last edited 6 months ago) (8 children)

Oh no, it's much more than a single piece of fiction, it's an entire mini-genre. If you're curious...

A short story... where the humans are the AI! https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message It's meant to suggest what could be done with arbitrary computational power and time, which is Eliezer's only way of evaluating AI: comparing it to the fictional version with infinite compute inside his head. Expanded into a longer story here: https://alicorn.elcenia.com/stories/starwink.shtml

Another parable by Eliezer (the genie is blatantly an AI): https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2 Fitting that his analogy for AI is a literal genie. This story also has some weird gender stuff, because why not!

One of the longer ones: https://www.fimfiction.net/story/62074/friendship-is-optimal An MLP MMORPG AI is engineered to be able to bootstrap to singularity. It manipulates everyone into uploading into its take on My Little Pony! The author intended it as a singularity gone subtly wrong, but because they posted it to an MLP fan-fiction site in addition to linking it on lesswrong, it got an audience that unironically liked the manipulative uploading scenario and prefers it to real life.

Gwern has taken a stab at it: https://gwern.net/fiction/clippy We made fun of Eliezer for warning about watching the training loss function; in this story the AI literally hacks its way out in the middle of training!

And another short story: https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story

So yeah, it's an entire genre at this point!

[–] [email protected] 5 points 6 months ago (11 children)

Short fiction of AGI takeover is a lesswrong tradition! And some longer fics too! Are you actually looking for specific examples and/or links? Lots of them are fun, in a sci-fi short-form kind of way. The goofier and cringier ones are definitely sneerable.

[–] [email protected] 10 points 6 months ago

I mean, if you play up the doom to hype yourself, dealing with employees who take it seriously feels like a deserved outcome.


So, despite their nitpicking of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although it does at least note his HBD sympathies) and still identifies a full eight racists. It mentions a talk discussing the Holocaust as a eugenics event (and adds an edit apologizing for that simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and the open discourse of ideas posed by banning racists, etc.).
