Perplexity R1 Llama 70B Uncensored GGUFs & Dynamic 4bit quant

I think Perplexity quietly released an uncensored version of the DeepSeek R1 Llama 70B distill - I actually totally missed this - did anyone see an announcement or know about this?

I uploaded GGUFs from 2-bit all the way up to 16-bit for the model: https://huggingface.co/unsloth/r1-1776-distill-llama-70b-GGUF
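When picking a quant, file size scales roughly with bits per weight. Here's a back-of-envelope estimate for a 70B-parameter model - note this ignores per-block quantization scales and metadata, so real GGUF files run somewhat larger:

```python
# Rough GGUF size estimate for a 70B-parameter model at various bit widths.
# Ignores per-block scales and metadata overhead, so actual files are a bit
# larger - this is only a ballpark for "will it fit on my disk/GPU?".
PARAMS = 70e9

def approx_size_gb(bits_per_weight: float) -> float:
    # bytes = params * bits / 8; divide by 1e9 for GB
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 4, 8, 16):
    print(f"{bits:>2}-bit: ~{approx_size_gb(bits):.0f} GB")
```

So the 2-bit quant lands around ~18 GB of weights while 16-bit is ~140 GB, which is why the low-bit quants matter for a model this size.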

I also uploaded dynamic 4-bit quants for finetuning and vLLM serving: https://huggingface.co/unsloth/r1-1776-distill-llama-70b-unsloth-bnb-4bit
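For serving, something like the following should work via vLLM's bitsandbytes support - a sketch only, since the exact flags have changed across vLLM releases, so check `vllm serve --help` for your version:

```shell
# Serve the dynamic 4-bit bitsandbytes checkpoint with vLLM.
# Flag names are an assumption and vary by vLLM version.
pip install vllm bitsandbytes

vllm serve unsloth/r1-1776-distill-llama-70b-unsloth-bnb-4bit \
  --quantization bitsandbytes \
  --load-format bitsandbytes \
  --max-model-len 8192
```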

A few days ago I uploaded dynamic 2-bit, 3-bit and 4-bit quants for the full R1 uncensored 671B MoE version, which dramatically improve accuracy by leaving certain modules unquantized. This is similar to the 1.58-bit quant of DeepSeek R1 we did! https://huggingface.co/unsloth/r1-1776-GGUF
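The idea behind the dynamic quants can be sketched in a few lines: most weights go to a low bit width, but quantization-sensitive modules stay at higher precision. This toy illustration is purely conceptual - the module names and skip list below are made up, not the actual ones used in the uploads:

```python
# Toy sketch of selective ("dynamic") quantization: keep a skip list of
# sensitive modules at high precision and quantize everything else low.
# Module names and the skip set are hypothetical, for illustration only.
SKIP = {"lm_head", "attn_output"}

def bits_for(module: str, low_bits: int = 2, high_bits: int = 16) -> int:
    """Return the bit width to use for a given module."""
    return high_bits if module in SKIP else low_bits

modules = ["embed_tokens", "attn_qkv", "attn_output", "mlp", "lm_head"]
plan = {m: bits_for(m) for m in modules}
print(plan)
```

Keeping a handful of modules unquantized costs relatively little extra disk space but avoids the worst accuracy degradation of a uniform low-bit quant.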