The way to look at models like R1 is as layers on top of the LLM architecture: a base generative model with reinforcement-learned chain-of-thought reasoning trained on top of it. We've basically hit a limit of what generative models can do on their own, so research is now branching out in new directions to supplement what the GPT architecture is already good at.
The potential here is that these kinds of systems will be able to do tasks that fundamentally could not be automated before. Given that, I think it's odd to say the utility is not commensurate with the effort being invested in pursuing this goal. Making this work would effectively be a new industrial revolution. The reality is that we don't actually know what's possible, but the rate of progress so far has been absolutely stunning.
You can tell it was written by a lib.