Newcomb's problem is a thought experiment where you're presented with two boxes, and the option to take one or both. One box is transparent and always contains $1000. The second is a mystery box.
Before you make your choice, a supercomputer (or a team of psychologists, etc.) predicted whether you would take one box or both. If it predicted you would take both, the mystery box is empty. If it predicted you'd take just the mystery box, then it contains $1,000,000. The predictor rarely makes mistakes.
This problem tends to split people roughly 50-50, with each side thinking the answer is obvious.
An argument for two-boxing is that, once the prediction has been made, your choice no longer influences the outcome. The mystery box already has whatever it has, so there's no reason to leave the $1000 sitting there.
An argument for one-boxing is that, statistically, one-boxers tend to walk away with more money than two-boxers. It's unlikely that the computer guessed wrong, so rather than hoping that you can be the rare case where it did, you should assume that whatever you choose is what it predicted.
Some version of it could exist. Not with the big numbers, and not with the high degree of certainty in the problem, but you could have, say, somebody who's on average 70% accurate at reading people, with boxes containing $1 and $10.
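The expected-value arithmetic for that toy version is easy to work out. A quick sketch, assuming the predictor is right with the same probability whichever way you choose (the function name and the break-even figure are my own working, not part of the original problem):

```python
# Toy Newcomb setup: transparent box holds $1, mystery box $10,
# predictor is correct with probability p regardless of your choice.

def expected_value(p, small=1.0, big=10.0):
    """Return (one_box_ev, two_box_ev) for predictor accuracy p."""
    one_box = p * big                 # mystery box is full only if correctly predicted
    two_box = small + (1 - p) * big   # mystery box is full only on a misprediction
    return one_box, two_box

for p in (0.5, 0.55, 0.7):
    one, two = expected_value(p)
    print(f"p={p:.2f}: one-box EV=${one:.2f}, two-box EV=${two:.2f}")
```

With these stakes the two strategies break even at 55% accuracy, so even a merely decent 70% reader makes one-boxing the better bet in expectation ($7.00 versus $4.00).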
It is somewhat idealized, in that it's a contrived scenario, but for me it's really just idle curiosity. Maybe the split reflects something about people's thought processes, or maybe it's just people interpreting the question differently.
Even if such a predictor were to exist in the short run, it wouldn't be stable. The predictor must be predicting somehow, and that method could eventually be at least partially sussed out, with future decisions changing as a result. Unless the predictor runs on literal magic, it would eventually no longer fit its own definition.