What changed in unsloth 2025.2.15 vs 2025.1.5 that can cause training failures?

I observed that, with the latest 2025.2.15 version, the loss oscillates more and drops less, using exactly the same QLoRA hyperparameters and datasets as with the 2025.1.5 version. Worse, after the training run the final loss is around ~0.8 vs ~0.2 before, and my perplexity calculation now returns NaN instead of ~1.2. Worst of all, when trying to evaluate the model output, I got nothing but data errors and no usable output.

This happened with the base model "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit". I am now attempting another run with "unsloth/Llama-3.1-Storm-8B-bnb-4bit", and so far I am seeing the same pattern: the loss jumps back and forth and drops much more slowly.

What changes in the unsloth library could have caused these differences? Has anybody encountered the same problems? How can I get back to the 2025.1.5 version of unsloth? Thanks in advance!
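For reference, my perplexity number comes from the usual exp-of-mean-loss formula. A minimal sketch of that calculation (not my exact script; `mean_loss` stands for whatever eval loss the trainer reports) shows the two ways it can go wrong, NaN propagating from the loss itself, or overflow when the loss blows up:

```python
import math

def perplexity(mean_loss: float) -> float:
    """Standard perplexity from a mean cross-entropy loss.

    Returns NaN if the loss itself is NaN (what I am seeing now),
    and inf if exp() overflows on a diverged loss.
    """
    if math.isnan(mean_loss):
        return float("nan")
    try:
        return math.exp(mean_loss)
    except OverflowError:
        return float("inf")

# With the old run's final loss of ~0.2, this gives ~1.22,
# matching the ~1.2 perplexity I used to get.
```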
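In the meantime, I assume pinning the package back to the older release (if 2025.1.5 is still published on PyPI) would look roughly like this:

```shell
# Downgrade to the known-good release and pin it, so a later
# `pip install -U` does not silently move it forward again.
pip install "unsloth==2025.1.5"

# Confirm which version is actually installed
pip show unsloth
```

Is this the recommended way to roll back, or are there pinned dependencies (e.g. the matching unsloth_zoo release) that also need to be downgraded together?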