Yeah, that's true. From my experience with the consumer versions of Gemini via the app, it's infuriating how quick it is to concede it's wrong when you shout at it.
It usually starts out fully confident in an answer, but question it even slightly and it caves, flips 180°, and insists it was wrong. That makes LLMs useless for certain tasks.
"The world only exists in the way I experience it, and everyone else is a 'weird kid'."
Perhaps you should realise that it's not all about people trying to be weird, sometimes it's just ordinary life. Nothing needs fixing, no one needs to get better.
Maybe your comment was bait, but this still needs saying.