R1-1776 Dynamic GGUFs by Unsloth

Hey guys, we uploaded 2-bit to 16-bit GGUFs for R1-1776, Perplexity's new DeepSeek-R1 finetune that removes all censorship while maintaining reasoning capabilities: https://huggingface.co/unsloth/r1-1776-GGUF

We also uploaded Dynamic 2-bit, 3-bit and 4-bit versions alongside the standard 3-bit, 4-bit, etc. versions. The Dynamic 4-bit is even smaller than the standard 4-bit (Q4_K_M) and achieves even higher accuracy. The 1.58-bit and 1-bit versions will have to come later, as they rely on imatrix quants, which take more time.

Instructions to run the model are in the model card. Don't forget the <|User|> and <|Assistant|> tokens (or use a chat template formatter), and don't forget <think>\n either! Prompt format: "<|User|>Create a Flappy Bird game in Python.<|Assistant|><think>\n"
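
If you're scripting the prompt yourself, here's a minimal sketch of building that format in Python (the helper name and example question are just for illustration):

```python
def build_r1_prompt(user_message: str) -> str:
    # R1-1776 uses the DeepSeek-R1 chat format: the user turn is wrapped in
    # <|User|>...<|Assistant|>, and the assistant turn is primed with
    # "<think>\n" so the model starts with its reasoning trace.
    return f"<|User|>{user_message}<|Assistant|><think>\n"

prompt = build_r1_prompt("Create a Flappy Bird game in Python.")
print(prompt)
```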

You can also refer to our previous blog for 1.58-bit R1 GGUF for hints and results: https://unsloth.ai/blog/r1-reasoning

| MoE Bits | Type | Disk Size | HF Link |
|---|---|---|---|
| 2-bit Dynamic | UD-Q2_K_XL | 211 GB | Link |
| 3-bit Dynamic | UD-Q3_K_XL | 298.8 GB | Link |
| 4-bit Dynamic | UD-Q4_K_XL | 377.1 GB | Link |
| 2-bit extra small | Q2_K_XS | 206.1 GB | Link |
| 4-bit | Q4_K_M | 405 GB | Link |
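
If you'd rather grab a single quant programmatically instead of cloning the whole repo, something like this works with huggingface_hub (the allow_patterns glob is an assumption about how the quant files are named on the Hub, following our other GGUF repos):

```python
from huggingface_hub import snapshot_download

# Download only the Dynamic 2-bit (UD-Q2_K_XL) shards from the repo.
# The pattern below assumes the files/folders are named after the quant type.
snapshot_download(
    repo_id="unsloth/r1-1776-GGUF",
    local_dir="r1-1776-GGUF",
    allow_patterns=["*UD-Q2_K_XL*"],
)
```

Afterwards, point llama.cpp (or your runner of choice) at the first shard of the downloaded quant.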

And you can find the rest, like 6-bit, 8-bit, etc., on the model card. Happy running!

P.S. we have a new update coming very soon which you guys will absolutely love! :)