[-] ebu@awful.systems 19 points 2 years ago* (last edited 2 years ago)

there were bits and pieces that made me feel like Jon Evans was being a tad too sympathetic to Eliezer and others whose track record really should warrant a somewhat greater degree of scepticism than he shows, but i had to tap out at this paragraph from chapter 6:

Scott Alexander is a Bay Area psychiatrist and a writer capable of absolutely magnificent, incisive, soulwrenching work ... with whom I often strongly disagree. Some of his arguments are truly illuminatory; some betray the intellectual side-stepping of a very smart person engaged in rationalization and/or unwillingness to accept the rest of the world will not adopt their worldview. (Many of his critics, unfortunately, are inferior writers who misunderstand his work, and furthermore suggest it’s written in bad faith, which I think is wholly incorrect.) But in fairness 90+% of humanity engages in such rationalization without even worrying about it. Alexander does, and challenges his own beliefs more than most.

the fact that Jon praises Scott's half-baked, anecdote-riddled, Red/Blue/Gray trichotomy as "incisive" (for playing the hits to his audience), and his appraisal of the meandering transhumanist non-sequitur reading of Allen Ginsberg's Howl as "soulwrenching" really threw me for a loop.

and then the later description of that ultimately rather banal New York Times piece as "long and bad" (a hilariously hypocritical pair of adjectives for a self-proclaimed fan of some of Scott's work to use), and the slamming of Elizabeth Sandifer as an "inferior writer who misunderstands Scott's work", for, uh, correctly analyzing Scott's tendencies to espouse and enable white supremacist and sexist rhetoric... yeah, it pretty much tanks my ability to take what Jon is writing at face value.

i don't get how, after spending so many words being gentle but firm about Eliezer's (lack of) accomplishments, he puts out such a full-throated defense of Scott Alexander (and the subsequent smearing of his """enemies"""). of all people, why him?

[-] ebu@awful.systems 15 points 2 years ago* (last edited 2 years ago)

Would you rather have a dozen back and forth interactions?

these aren't the only two possibilities. i've had some interactions where i got handed one ref sheet and a sentence description and the recipient was happy with the first sketch. i've had some where i got several pieces of references from different artists alongside paragraphs of descriptions, and there were still several dozen attempts. tossing in ai art just increases the volume, not the quality, of the interaction

Besides, this is something I've heard from other artists, so it's very much a matter of opinion.

i have interacted with hundreds of artists, and i have yet to meet an artist that does not, to at least some degree, have some kind of negative opinion on ai art, except those for whom image-generation models were their primary (or more commonly, only) tool for making art. so if there is such a group of artists that would be happy to be presented with ai art and asked to "make it like this", i have yet to find them

Annoying, sure, but not immoral.

annoying me is immoral actually

[-] ebu@awful.systems 17 points 2 years ago

as someone who only draws as a hobbyist, but who has taken commissions before, i think it would be very annoying to have a prospective client go "okay so here's what i want you to draw" and then send over ai-generated stuff. if only because i know said client is setting their expectations for the hyper-processed, over-tuned look of the machine instead of what i actually draw

[-] ebu@awful.systems 18 points 2 years ago

i couldn't resist

Reddit post titled "The Anti-AI crowd is so toxic and ridiculous that it's actually pushed me FURTHER into AI art"

at least when this rhetoric popped up around crypto and GameStop stocks, there was a get-rich-quick scheme attached to it. these fuckers are doing it for free

[-] ebu@awful.systems 20 points 2 years ago* (last edited 2 years ago)

simply ask the word generator machine to generate better words, smh

this is actually the most laughable/annoying thing to me. it betrays such a comprehensive lack of understanding of what LLMs do and what "prompting" even is. you're not giving instructions to an agent, you're handing a word predictor a list of words to extend

in my personal experiments with offline models, using something like "below is a transcript of a chat log with XYZ" as a prompt instead of "You are XYZ" immediately gives much better results. not good results, but better
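to make the "prefix, not instructions" point concrete, here's a minimal sketch. the prompt strings and the `XYZ` persona are made up for illustration; the point is only that both framings are just text prepended to the same flat string before the model predicts a continuation:

```python
# Sketch: a "prompt" is just text prepended to the input. Both framings
# below are illustrative assumptions, not any real model's required format.

user_message = "what's the weather like on Mars?"

# "instruction" framing: reads like an order to an agent
instruction_prompt = (
    "You are XYZ, a helpful assistant.\n"
    f"User: {user_message}\nXYZ:"
)

# "transcript" framing: reads like a document for the predictor to continue
transcript_prompt = (
    "Below is a transcript of a chat log with XYZ.\n"
    f"User: {user_message}\nXYZ:"
)

def build_model_input(prompt: str) -> str:
    """Either way, the model sees one flat string of tokens and extends
    it; nothing in the prefix is 'executed' as an instruction."""
    return prompt

for p in (instruction_prompt, transcript_prompt):
    assert build_model_input(p).endswith("XYZ:")  # same shape either way
```

the "transcript" framing tends to work better on plain (non-chat-tuned) models simply because chat-log-shaped documents are common in the training data, so the continuation statistics line up.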

[-] ebu@awful.systems 20 points 2 years ago

it is a little entertaining to hear them do extended pontifications on what society would look like if we had pocket-size AGI, life-extension or immortality tech, total-immersion VR, actually-good brain-computer interfaces, mind uploading, etc. etc. and then turn around and pitch a fit when someone says "okay so imagine if there were a type of person that wasn't a guy or a girl"

[-] ebu@awful.systems 15 points 2 years ago

typically one prefers their questions be answered correctly. but hey, you are free to be wrong faster now

[-] ebu@awful.systems 20 points 2 years ago* (last edited 2 years ago)

i really, really don't get how so many people are making the leaps from "neural nets are effective at text prediction" to "the machine learns like a human does" to "we're going to be intellectually outclassed by Microsoft Clippy in ten years".

like it's multiple modes of failing to even understand the question, all happening at once. i'm no philosopher; i have no coherent definition of "intelligence", but it's also pretty obvious that all LLMs are doing is statistical extrapolation on language. i'm just baffled at how many so-called enthusiasts and skeptics alike just... completely fail at the first step of asking "so what exactly is the program doing?"
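"statistical extrapolation on language" can be shown in miniature with a toy bigram model. this is deliberately not how a real LLM works internally (those use learned neural representations over far more context), but the task is the same: predict the next token from the preceding ones. the corpus here is made up:

```python
from collections import Counter, defaultdict

# Toy corpus; whitespace tokenization for simplicity
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# "Training": count which word follows which
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word: str) -> str:
    # Statistical extrapolation: pick the most frequently observed successor
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

swap the bigram table for a transformer over billions of documents and you scale the quality enormously, but you never change the nature of the task: continuation, not comprehension.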

[-] ebu@awful.systems 16 points 2 years ago* (last edited 2 years ago)

i cant stop scrolling through this hot garbage, it just keeps getting better

cut-off tweet from the same account saying that AIs are now capable of hypnotizing humans

[-] ebu@awful.systems 17 points 2 years ago* (last edited 2 years ago)

i'll take trolls "pretending" to not understand computational time over fascists "pretending" to gush over other fascists any day

[-] ebu@awful.systems 18 points 2 years ago

it's funny how your first choice of insult is accusing me of not being deep enough into llm garbage. like, uh, yeah, why would i be

but also how dare you -- i'll have you know i only choose the most finely-tuned, artisanally-crafted models for my lawyering and/or furry erotic roleplaying needs

[-] ebu@awful.systems 19 points 2 years ago* (last edited 2 years ago)

as previously discussed, the rabbit r1 turns out to be (gasp) just an android app.

in a twist no one saw coming, the servers running "rabbit os" are reported to just be running Ubuntu, and the "large action model" that was supposed to be able to watch humans use interfaces and learn how to use them turns out to just be a series of hardcoded places to click in Playwright.
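for a sense of what "hardcoded places to click" amounts to: a fixed list of selector/action steps replayed by a browser automation tool, with maybe one templated slot. the selectors and the music-search flow below are entirely made up for illustration; the reporting only says the interactions were hardcoded Playwright steps, not what they look like:

```python
# Hypothetical "large action model": a static script, no learning anywhere.
# Selectors and flow are invented for illustration.
HARDCODED_MUSIC_FLOW = [
    ("input#search", "fill", "{song_name}"),        # type the query
    ("button.search-submit", "click", None),        # submit it
    ("div.result-row:first-child", "click", None),  # pick the top result
    ("button.play", "click", None),                 # press play
]

def dry_run(flow, song_name):
    """Substitute the one variable slot and return the scripted steps.
    A real runner would hand each step to a browser; this just lists them."""
    steps = []
    for selector, action, value in flow:
        if value is not None:
            value = value.format(song_name=song_name)
        steps.append((selector, action, value))
    return steps

steps = dry_run(HARDCODED_MUSIC_FLOW, "Never Gonna Give You Up")
assert steps[0] == ("input#search", "fill", "Never Gonna Give You Up")
```

the gap between this and a model that "watches humans use interfaces and learns" is the entire product pitch.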
