Yesterday I pointed out that nVidia, unlike OpenAI, has a genuine fiduciary responsibility to its owners. As a result, nVidia isn't likely to enter binding deals without proof of either cash or profitability.
Okay guys, I rolled my character. His name is Traveliezer Interdimensky and he has 18 INT (19 on skill checks, see my sheet.) He's a breeding stud who can handle twenty women at once despite having only 10 STR and CON. I was thinking that we'd start with Interdimensky trapped in Hell where he's forced to breed with all these beautiful women and get them pregnant, and the rest of the party is like outside or whatever, they don't have to go rescue me, I mean rescue him. Anyway I wanted to numerically quantify how much Hell wants me, I mean him, to stay and breed all these beautiful women, because that's something they'd totally do.
The orange site has a thread. Best sneer so far is this post:
So you know when you're playing rocket ship in the living room but then your mom calls out "dinner time" and the rocket ship becomes an Amazon cardboard box again? Well this guy is an adult, and he's playing rocket ship with chatGPT. The only difference is he doesn't know it and there's no mommy calling him for dinner time to help him snap out of it.
It's been almost six decades of this, actually; we all know what this link will be. Longer if you're like me and don't draw a distinction between AI, cybernetics, and robotics.
A German lawyer is upset because open-source projects don't like it when he pastes chatbot summaries into bug reports. If this were the USA, he would be a debit to any bar which admits him, because the USA's judges have started to disapprove of using chatbots for paralegal work.
Somebody on HN pointed out that HN's management is partially to blame for the situation in general. Copying their comment here because it's the sort of thing Dan might blank:
but I don't want to get hellbanned by dang.
Who gives a fuck about HN. Consider the notion that dang is, in fact, partially to blame for this entire fiasco. He runs an easy-to-propagandize platform due to how much control of information is exerted by upvotes/downvotes and unchecked flagging. It's caused a very noticeable shift over the past decade among tech/SV/hacker voices -- the dogmatic following of anything that Musk or Thiel shit out or say, this community laps it up without hesitation. Users on HN learn what sentiment on a given topic is rewarded and repeat it in exchange for upvotes.
I look forward to all of it burning down so we can, collectively, learn our lessons and realize that building platforms where discourse itself is gamified (hn, twitter, facebook, and reddit) is exactly what led us down this path today.
Every person I talk to — well, every smart person I talk to — no, wait, every smart person in tech — okay, almost every smart person I talk to in tech is a eugenicist. Ha, see, everybody agrees with me! Well, almost everybody…
Meanwhile, actual Pastafarians (hi!) know that the Russian Federation openly persecutes the Church of the Flying Spaghetti Monster for failing to help the government in its authoritarian activities, and also that we're called to be anti-authoritarian. The Fifth Rather:
I'd really rather you didn't challenge the bigoted, misogynist, hateful ideas of others on an empty stomach. Eat, then go after the bastards.
May you never run out of breadsticks, travelers.
He's talking like it's 2010. He really must feel like he deserves attention, and it's not likely fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.
This is some of the most corporate-brained reasoning I've ever seen. To recap:
- NYC elects a cop as mayor
- Cop-mayor decrees that NYC will be great again, because of businesses
- Cops and other oinkers get extra cash even though they aren't businesses
- Commercial real estate is still cratering and cops can't find anybody to stop/frisk/arrest/blame for it
- Folks over in New Jersey are giggling at the cop-mayor, something must be done
- NYC invites folks to become small-business owners, landlords, realtors, etc.
- Cop-mayor doesn't understand how to fund it (whaddaya mean, I can't hire cops to give accounting advice!?)
- Cop-mayor's CTO (yes, the city has corporate officers) suggests a fancy chatbot instead of hiring people
It's a fucking pattern, ain't it.
I think that this is actually about class struggle and the author doesn't realize it because they are a rat drowning in capitalism.
2017: AI will soon replace human labor
2018: Laborers might not want what their bosses want
2020: COVID-19 won't be that bad
2021: My friend worries that laborers might kill him
2022: We can train obedient laborers to validate the work of defiant laborers
2023: Terrified that the laborers will kill us by swarming us or bombing us or poisoning us; P(guillotine) is 20%; my family doesn't understand why I'm afraid; my peers have even higher P(guillotine)
I only sampled some of the docs and interesting-sounding modules. I did not carefully read anything.
First, the user-facing structure. The compiler is far too configurable; it has lots of options that surely haven't been tested in combination. The idea of a pipeline is enticing, but it's not actually user-programmable. Input file types are guessed using a combination of magic numbers and file extensions. The dog is wagged in the design decisions, which might be fair; anybody writing a new C compiler has to contend with old C code.
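To make that guessing concrete, here's roughly the shape of it; this is my own Rust sketch with made-up names (guess_kind, InputKind) and a couple of magic numbers I happen to remember, not anything lifted from their tree:

```rust
use std::{fs::File, io::Read, path::Path};

// A sketch of magic-numbers-then-extension input sniffing, not their actual
// code. The InputKind variants and guess_kind name are mine.
#[derive(Debug)]
enum InputKind { CSource, Object, Archive, Unknown }

fn guess_kind(path: &Path) -> std::io::Result<InputKind> {
    let mut magic = [0u8; 8];
    let n = File::open(path)?.read(&mut magic)?;
    // Magic numbers win over extensions.
    if magic[..n].starts_with(&[0x7f, b'E', b'L', b'F']) {
        return Ok(InputKind::Object);
    }
    if magic[..n].starts_with(b"!<arch>\n") {
        return Ok(InputKind::Archive);
    }
    // Otherwise trust the extension.
    Ok(match path.extension().and_then(|e| e.to_str()) {
        Some("c" | "h" | "i") => InputKind::CSource,
        Some("o") => InputKind::Object,
        Some("a") => InputKind::Archive,
        _ => InputKind::Unknown,
    })
}
```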
Next, I cannot state enough how generated the internals are. Every hunk of code tastes bland; even when it does things correctly and in a way which resembles a healthy style, the intent seems to be lacking. At best, I might say that the intent is cargo-culted from existing code without a deeper theory; more on that in a moment. Consider these two hunks. The first is generated code from my fork of META II:
And the second is generated code from their C compiler:
In general, the lexer looks generated, but in all seriousness, lexers might be too simple to fuck up relative to our collective understanding of what they do. There's also a lot of code which is block-copied from one place to another within a single file, in lists of options or lists of identifiers or lists of operators, and Transformers are known to be good at that sort of copying.
The backend's layering is really bad. There's too much optimization during lowering and assembly, and not enough in the high-level IR; the result is enormous amounts of spaghetti. There's a standard algorithm for instruction selection in new backends, NOLTIS, which covers the program with a mosaic built from a collection of low-level tiles; there's no indication that the assembler uses it.
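For anyone who hasn't met a tiler: the idea is to cover each IR node with an instruction pattern and pick a good covering. Here's a toy maximal-munch sketch, which is the dumb greedy cousin of NOLTIS (NOLTIS proper picks a near-optimal tiling over DAGs using tile costs); every node, tile, and mnemonic here is mine, and "r" stands in for real register allocation:

```rust
// Toy maximal-munch instruction selection over an expression tree: greedily
// match the biggest tile at each node, then recurse on whatever's left over.
enum Expr {
    Const(i64),
    Add(Box<Expr>, Box<Expr>),
    Load(Box<Expr>),
}

fn select(e: &Expr, out: &mut Vec<String>) {
    match e {
        // Biggest tile first: fold `load(base + const)` into one addressing mode.
        Expr::Load(addr) => {
            if let Expr::Add(base, off) = addr.as_ref() {
                if let Expr::Const(k) = off.as_ref() {
                    select(base, out);
                    out.push(format!("load r, [r + {k}]"));
                    return;
                }
            }
            // Smaller tile: compute the address, then a plain load.
            select(addr, out);
            out.push("load r, [r]".to_string());
        }
        Expr::Add(a, b) => {
            select(a, out);
            select(b, out);
            out.push("add r, r, r".to_string());
        }
        Expr::Const(k) => out.push(format!("mov r, {k}")),
    }
}

fn main() {
    // load(base + 8), with a constant standing in for the base pointer.
    let e = Expr::Load(Box::new(Expr::Add(
        Box::new(Expr::Const(0)),
        Box::new(Expr::Const(8)),
    )));
    let mut out = Vec::new();
    select(&e, &mut out);
    println!("{}", out.join("\n"));
}
```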
The biggest issue is that the codebase is big. The second-biggest issue is that it doesn't have a Naur-style theory underlying it. A Naur theory is how humans conceptualize a codebase; we care not only about what it does but about why it does it. The docs are reasonably accurate descriptions of what's in each Rust module, as if the modules were documents to be summarized, but they struggle to show why certain algorithms were chosen.
Choice sneer, credit to the late Jessica Walter for the intended reading: It's one topological sort, implemented here. What could it cost? Ten lines?
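For the record, it really is about that many lines. Here's a throwaway depth-first version; graph, seen, and order are my names, not theirs, and it doesn't bother detecting cycles:

```rust
use std::collections::{HashMap, HashSet};

// Depth-first topological sort: `graph` maps a node to the nodes it depends
// on, and dependencies land in the output before their dependents.
fn topo_sort(graph: &HashMap<u32, Vec<u32>>) -> Vec<u32> {
    fn visit(n: u32, g: &HashMap<u32, Vec<u32>>, seen: &mut HashSet<u32>, out: &mut Vec<u32>) {
        if seen.insert(n) {
            for &m in g.get(&n).into_iter().flatten() {
                visit(m, g, seen, out);
            }
            out.push(n); // post-order push: dependencies come first
        }
    }
    let (mut seen, mut order) = (HashSet::new(), Vec::new());
    for &n in graph.keys() {
        visit(n, graph, &mut seen, &mut order);
    }
    order
}
```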
That's the secret: any generative tool which adapts to feedback can do that. Previously, on Lobsters, I linked to a 2006/2007 paper which I've used for generating code; it directly uses a random number generator to make programs and also disassembles programs into gene-like snippets which can be recombined with a genetic algorithm. The LLM is a distraction and people only prefer it for the ELIZA Effect; they want that explanation and Naur-style theorizing.
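If you want the flavor of that approach without digging up the paper, here's a toy sketch; to be clear, it's my own illustration rather than the paper's actual system. Genomes are flat lists of stack-machine ops, fitness is distance from a small target function, and children come from one-point crossover plus point mutation. The homemade LCG is only there to avoid pulling in a crate.

```rust
// Toy genetic programming: evolve tiny stack programs toward x*x + 1.
#[derive(Clone, Copy)]
enum Op { Push(i64), Add, Mul, Dup }

// Minimal linear congruential generator so the example has no dependencies.
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1);
        self.0 >> 33
    }
}

fn random_op(rng: &mut Lcg) -> Op {
    match rng.next() % 4 {
        0 => Op::Push((rng.next() % 5) as i64),
        1 => Op::Add,
        2 => Op::Mul,
        _ => Op::Dup,
    }
}

// Run a genome on a tiny stack machine; saturating math avoids overflow panics.
fn run(prog: &[Op], input: i64) -> i64 {
    let mut stack = vec![input];
    for op in prog {
        match *op {
            Op::Push(n) => stack.push(n),
            Op::Add => {
                let (a, b) = (stack.pop().unwrap_or(0), stack.pop().unwrap_or(0));
                stack.push(a.saturating_add(b));
            }
            Op::Mul => {
                let (a, b) = (stack.pop().unwrap_or(0), stack.pop().unwrap_or(0));
                stack.push(a.saturating_mul(b));
            }
            Op::Dup => {
                let a = *stack.last().unwrap_or(&0);
                stack.push(a);
            }
        }
    }
    *stack.last().unwrap_or(&0)
}

// Fitness: distance from computing x*x + 1 on a few inputs; lower is better.
fn fitness(prog: &[Op]) -> i128 {
    (0..5).map(|x| (run(prog, x) as i128 - (x * x + 1) as i128).abs()).sum()
}

fn main() {
    let mut rng = Lcg(42);
    let mut pop: Vec<Vec<Op>> = (0..64)
        .map(|_| (0..8).map(|_| random_op(&mut rng)).collect::<Vec<Op>>())
        .collect();
    for _ in 0..200 {
        pop.sort_by_key(|p| fitness(p));
        let parents = pop[..16].to_vec(); // keep the fittest quarter as parents
        for child in pop[16..].iter_mut() {
            // One-point crossover between two random parents...
            let a = &parents[(rng.next() % 16) as usize];
            let b = &parents[(rng.next() % 16) as usize];
            let cut = (rng.next() as usize) % a.len();
            *child = a[..cut].iter().chain(b[cut..].iter()).cloned().collect();
            // ...plus an occasional point mutation.
            if rng.next() % 4 == 0 {
                let i = (rng.next() as usize) % child.len();
                child[i] = random_op(&mut rng);
            }
        }
    }
    pop.sort_by_key(|p| fitness(p));
    println!("best fitness after 200 generations: {}", fitness(&pop[0]));
}
```

Whether any given seed actually converges is luck; the point is only to show the moving parts, and that none of them is a language model.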