I thought I was imagining things, but since others seem to be doing better too, I guess the update really did improve the model! That's awesome.
From my side, at least two things have improved: the English no longer decays into caveman speak, and getting a story off the ground is far easier, needing only minimal directions to the model. Also, some contradictory descriptions tend to work better now. All of this is a genuine improvement, but I'd be lying if I said I'd tested these things thoroughly.
Something I tried as a quick test was checking how the model handles long logs and... yep, it still gets stuck, running in circles due to weave patterns that repeat ad nauseam. It may just be me having bad samples, but the problems still linger past 200 kB, get heavy past 500 kB, and become unbearable at the 1 MB mark. By this I just mean having to unstick the LLM with heavy editing, not that continuing is impossible. If someone has a long log that stays fluid, please share what conditions allow for it.
But yeah, Basti0n is right! There was indeed a noticeable improvement, even if we're not there yet. Maybe there's a future for DeepSeek after all!