The planet-burning text summarisers seem to have found their main purpose as confirmation-bias machines, and I increasingly find myself arguing with the output of an LLM when talking to people.
Now, after many years in the world of office jobs, my general perception of most people is:
- severe inability to be wrong
- severe inability to do anything about being wrong
- egos so weak a single look could shatter them
- huffing your own farts and doubling down is the name of the game
And so I have to contend with this: people who cannot accept they have made a mistake, arguing with me via LLM output they managed to wrestle into agreeing with them because they can’t admit fault. I’ve lost count of the times I have given learned, educated, provably correct advice only to hear “but Copilot told me this”, and the output is some hallucinated drivel cos the person who wrote the prompt was more bothered about being right than resolving the original issue they were having.
I know a small handful of people who have already gone completely off the rails, getting ChatGPT to confirm basically any delusion they have. Then when I see them they’ll go on about it for hours on end: how they’ve broken out of the matrix and see the world for what it is, and how we don’t need schools anymore, just give everyone an LLM.
All it reminds me of is the “do your own research” crowd of Mumsnet nazis parroting some random Facebook post about how being vegan gives you autism and the only cure is shooting your child in the head. Except it’s worse, because the LLM can keep that delusion going for longer and build on it, until most people are living in some AI-generated dreamland of pure, unfiltered confirmation bias.
I think we’re going to hit some major problems before long as a significant portion of people start offloading their thinking to these corporate models. I already see a decent chunk just accepting the output as an unbiased authority, and it’s scary. It’s a completely new and arguably more effective way to deliver even more extreme propaganda, should they choose to, and very few people would even question it.
Oh, and the number of people unknowingly sharing Sora videos is… dire.
To add to this, I never really understood why a lot of people have such a hard time being wrong and such diabolically weak egos. Like, it’s logically infeasible to never be wrong, and if you are wrong, just, idk, learn from it? I like being wrong cos it means I can fix it and not be wrong later. The only reasonable response to being incorrect is “oops” followed by “thanks”.
Ain’t nothing quite like paying a consultant hundreds of thousands to tell you you’re right and all your employees are wrong, though. An LLM doesn’t have the same slicked-back, gelled-up hair appeal of a good ol’ fashioned consultant.
But it does come with the potential upside of still costing hundreds of thousands of dollars.