It's a Large Language Model designed to generate natural-sounding language based on statistical probabilities and patterns - not knowledge or understanding. It doesn't "lie" and it doesn't have the capability to explain itself. It just talks.
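To make that concrete, here's a toy sketch (my own illustration, not how any specific model is implemented) of what "statistical probabilities, not knowledge" means: the model only ever samples a next token from a probability distribution, and a fluent-but-false continuation can carry probability just like a true one.

```python
import random

# Hypothetical next-token distribution after a prompt like
# "The capital of France is". The numbers are made up for illustration;
# a real model scores tens of thousands of tokens with a neural network.
next_token_probs = {
    "Paris": 0.6,     # the statistically dominant continuation
    "Lyon": 0.3,
    "Atlantis": 0.1,  # false, but still a "plausible-sounding" token
}

def sample_next_token(probs):
    # Pick one token at random, weighted by probability.
    # Nothing here checks whether the result is true.
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

Run it a few times and it will occasionally print the wrong answer with full confidence, which is the whole point: sampling produces coherent output, not verified output.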
The coherence of that output is by design; the accuracy of its content is not.
This isn't the model failing. It's just being used for something it was never intended for.