This is like chaff, and I think it would work. But you'd have to deal with the fact that whatever activity patterns it generated, you would appear to be the one doing them.
I think there are other ways that AI can be used for privacy.
For example, did you know that you can be identified by how you type/speak online? What if you filtered everything you said through an LLM first, normalizing it? That takes away a fingerprinting option. A pretty small local LLM would be enough, something that could run on a modest desktop...
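A rough sketch of what that could look like, assuming a small local model served by Ollama on its default port (the model name and prompt wording here are just placeholders, not a recommendation):

```python
# Minimal sketch: pipe outgoing text through a small local model to strip
# stylometric tells (word choice, punctuation habits, sentence rhythm).
# Assumes Ollama is running locally on its default port with some small
# instruction-tuned model pulled; both are assumptions, not requirements.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "llama3.2:3b"  # placeholder: any small local instruction-tuned model

def normalize(text: str) -> str:
    """Rewrite text in a flat, neutral register before posting it."""
    prompt = (
        "Rewrite the following message in plain, neutral English. "
        "Keep the meaning, drop any distinctive phrasing or punctuation habits. "
        "Return only the rewritten text.\n\n" + text
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    # With stream=False, Ollama returns the full completion in "response".
    return resp.json()["response"].strip()

if __name__ == "__main__":
    print(normalize("tbh i reckon this'd work... kinda neat idea imo!!"))
```

Since everything stays on your own machine, the text never touches a third-party API, which is the whole point.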