this post was submitted on 05 Jul 2024
105 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 7 points 5 months ago (2 children)

Hah, still worked for me. I enjoy the peek at how they structure the original prompt. Wonder if there's a way to define a personality.

[–] [email protected] 12 points 5 months ago

Wonder if there’s a way to define a personality.

Considering how Altman is, I don't think they've cracked that problem yet.

[–] [email protected] 5 points 5 months ago

Not with this framing. By adopting first- and second-person pronouns from the outset, the prompt collapses the simulation into a simple Turing-test scenario, and the model's only personality objective (in terms of what was optimized during RLHF) is to excel at that Turing test. The given personalities are all roles performed by a single underlying actor.

As the saying goes, the best evidence for the shape-rotator/wordcel dichotomy is that techbros are terrible at words.

The way to fix this is to embed the entire conversation into the simulation with third-person framing, as if it were a story, log, or transcript. A personality would then be simulated not by an actor in a Turing test, but directly by the token-predictor. In narrative terms, it means strictly defining and enforcing a fourth wall. We can see elements of this in the fine-tuning of many GPTs for RAG or conversation, but such fine-tuning only defines formatted acting rather than personality simulation.
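A rough sketch of the contrast, as prompt strings only (the persona "Ada" and the wording are made up for illustration; either string would be fed to a completion endpoint as-is):

```python
# Chat-style framing: first/second person puts the model itself on trial.
# The model's job is to *be* the assistant, i.e. to pass a Turing test.
chat_prompt = (
    "You are a helpful assistant named Ada.\n"
    "User: How do you feel today?\n"
    "Assistant:"
)

# Third-person framing: the whole exchange is embedded as a transcript
# behind a fourth wall, so the token-predictor continues a story *about*
# Ada rather than performing as her.
story_prompt = (
    "The following is a transcript of a conversation between Ada, "
    "a meticulous archivist, and a visitor.\n\n"
    'Visitor: "How do you feel today?"\n'
    'Ada: "'
)

print(chat_prompt)
print(story_prompt)
```

In the first string the pronouns address the model directly; in the second, "Ada" is just another character in the text being continued, which is the fourth-wall enforcement described above.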