Tutorial: How to Install Unsloth on Windows
Unsloth now works on Windows!
Tutorial: How to Train your own Reasoning model using Llama 3.1 (8B) + Unsloth + GRPO
You can now train your own Reasoning model with just 5GB VRAM!
What changed in the latest Unsloth version (2025.2.15) versus 2025.1.5 that can cause training failures?
Phi-4-mini Bug Fixes + GGUFs
Phi-4-mini + Bug Fix Details
RAG vs Fine-Tuning: A Developer’s Guide to Enhancing AI Performance
Will Training Qwen2.5-Coder-7B on FineTome-100k Exceed Colab's Runtime Limit?
You can now train your own Reasoning model like DeepSeek-R1 locally! (7GB VRAM min.)
You can now train your own Reasoning model using GRPO (5GB VRAM min.)
Is Unsloth GRPO compatible with the new Phi-4 multimodal models?
Any good use cases for small (0.5B–3B) 4-bit models?
Comparing the relative performance of Unsloth R1 dynamic quants: IQ2_XXS (183GB) beats Q2_K_XL (212GB)
[P] Train your own Reasoning model - GRPO works on just 5GB VRAM
You can now train your own o3-mini model on your local device!
Tutorial: Train your own Reasoning model using Llama 3.1 (8B) + GRPO on Google Colab
You can now train your own Reasoning model with just 5GB VRAM
Llama AttributeError: 'bool' object has no attribute 'all_special_tokens'
You can now train your own Reasoning model locally with just 5GB VRAM!
10x longer context + 90% less VRAM - GRPO now in Unsloth!
Perplexity R1 Llama 70B Uncensored GGUFs & Dynamic 4bit quant
Train your own Reasoning model like DeepSeek-R1 locally (5GB VRAM min.)
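Many of the titles above cover the same workflow: turning a small base model into a reasoning model with Unsloth and GRPO on a few GB of VRAM. As a rough, hedged illustration only (not taken from any of the linked posts), here is a minimal sketch that combines Unsloth's FastLanguageModel with TRL's GRPOTrainer; the model name, dataset, LoRA settings, hyperparameters, and the toy length-based reward are all placeholder assumptions, and a real run would use a verifiable reward (e.g. checking answers) as the linked tutorials describe.

```python
# Minimal GRPO sketch with Unsloth + TRL (illustrative only; all settings are assumptions).
from unsloth import FastLanguageModel   # import unsloth first so it can patch TRL
from trl import GRPOConfig, GRPOTrainer
from datasets import load_dataset

# Load a 4-bit quantized base model to keep VRAM usage low (model name is a placeholder).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length = 1024,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Toy reward: GRPO scores each sampled completion in a group; here we just
# prefer completions near 200 characters as a stand-in for a real verifier.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

# Any dataset with a "prompt" column works; this public TRL example dataset is a placeholder.
dataset = load_dataset("trl-lib/tldr", split = "train")

training_args = GRPOConfig(
    output_dir = "grpo-demo",
    per_device_train_batch_size = 4,   # kept equal to num_generations for divisibility
    gradient_accumulation_steps = 1,
    num_generations = 4,               # completions sampled per prompt for the group baseline
    max_prompt_length = 256,
    max_completion_length = 256,
    max_steps = 50,
    learning_rate = 5e-6,
    logging_steps = 1,
)

trainer = GRPOTrainer(
    model = model,
    processing_class = tokenizer,
    reward_funcs = reward_len,
    args = training_args,
    train_dataset = dataset,
)
trainer.train()
```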