this post was submitted on 27 Feb 2024
50 points (100.0% liked)

TechTakes


just think: an AI trained on depressed social justice queers

wonder what Hive is making of Bluesky

"you took a perfectly good pocket calculator and gave it anxiety"

you are viewing a single comment's thread
[–] [email protected] 11 points 8 months ago* (last edited 8 months ago) (2 children)

If AI is trained on some subset of human interactions and subjects, let's call it set A, and someone uses the AI to learn about a subset of human interactions and subjects, let's call it set B, then there necessarily must be some shared set C of subjects and interactions contained within both A and B. In some cases information may literally be mirrored, or it may simply be memes or ideas that pop up over and over again. Note: I am talking about perspectives on things, metadata if you will about those things contained within sets A and B, just as much as I am talking about the specific things themselves in A and B.

The relative size of set C can be thought of as a practical measure of the magnitude difference between pattern matching and knowledge in a given context. AI design seems to treat set C as always trivial in size compared to set A or set B, and does not seem concerned with the possible cross-talk effects that arise from set A and set B not truly being linearly independent. Even worse, the cross-talk that happens creates an invisible distortion that degrades the usefulness of the AI, but that cannot be fundamentally distinguished (through inspection of the AI alone and not the data sets) from the correctly functioning aspects of the AI.
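The "relative size of set C" the commenter describes can be sketched as a toy overlap measure, e.g. Jaccard similarity. This is a minimal illustration only; all the names and topic sets here are hypothetical, not taken from any real model or dataset:

```python
# Toy sketch of the commenter's A/B/C framing. The topic sets and
# function name are illustrative assumptions, not a real AI's API.

def jaccard_overlap(a: set, b: set) -> float:
    """Relative size of C = A ∩ B, expressed as a fraction of A ∪ B."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Set A: topics the model was trained on (hypothetical).
train_topics = {"rust", "memes", "cooking", "bluesky", "anxiety"}
# Set B: topics a user asks the model about (hypothetical).
query_topics = {"bluesky", "anxiety", "calculators"}

# Set C: the shared subset the comment is concerned with.
c = train_topics & query_topics
print(sorted(c))                                              # ['anxiety', 'bluesky']
print(round(jaccard_overlap(train_topics, query_topics), 3))  # 0.333
```

The point being made is that systems tend to treat this ratio as negligible, while the cross-talk it represents grows as machine output feeds back into training data.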

The larger set C becomes, the exponentially quicker the collective wisdom of human conversations online is strip mined and obscured behind machine generated fluff.

Everyone wants to talk about AI from the angle of the genius computer programmer making an intelligent machine, because that is sexy, but what these "AI" really represent are expressions of the power of good data sets and the priceless value of human beings who methodically contribute high quality content to those data sets. In other words, AI and LLMs are about humans intelligently structuring set A and set B so that set C isn't a problem. AI is not some magic thing that only needs humans to be trained on quality data sets to get started; rather, AI is an expression of how powerful our collective conversations and creations are when we create structures out of them that computers can interface with.

Silicon Valley and the 1% are trying to convince us that the collective power of the crowd is actually something they own by slapping "AI" on it and calling it a day, but even with the basic argument I made above (excluding the fact that AI also just hallucinates shit) there is just no way that AI in its current form, alongside the simultaneous divestment and devaluing of the systems that created the high quality training data sets in the first place, is long term sustainable. The sensemaking of AI HAS to collapse in on itself along any meaningful metric if things keep heading in this direction, and the whiplash that is going to cause will hurt a lot of humans.

[image: ouroboros, the serpent that is eating itself]

[–] [email protected] 12 points 8 months ago

definitely one of the longest ways to say "they're thieving little shits trying to sell things back to us after slapping a lick of paint on it" I've seen in a long while, unfortunately there's no achievement badge for that tho

[–] [email protected] 8 points 8 months ago

this reads like taking the weird promises I've seen made for many different technologies and then extrapolating from those promises as if they were a description of reality

(academic blockchain theorising has often been of this genre for example)