Phi-4-mini Bug Fixes + GGUFs

Hey guys! llama.cpp added support for Phi-4 mini today - we also found and fixed 4 tokenization-related problems in Phi-4 mini!

The biggest problem with the chat template is that the EOS token was set to <|endoftext|>, but it should be <|end|>!
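A minimal sketch of how you can check this yourself with the transformers tokenizer - the upstream repo id `microsoft/Phi-4-mini-instruct` and the local override shown here are illustrative assumptions, not the actual upstream patch:

```python
# Sketch: inspect (and, if needed, locally override) the EOS token.
# Assumes the upstream repo id "microsoft/Phi-4-mini-instruct".
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")
print(tokenizer.eos_token)  # should print "<|end|>", not "<|endoftext|>"

# If a copy still ships the wrong EOS token, it can be overridden locally:
tokenizer.eos_token = "<|end|>"
```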

GGUFs are at: https://huggingface.co/unsloth/Phi-4-mini-instruct-GGUF
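If you want to pull one of the GGUFs programmatically, here is a rough sketch using huggingface_hub - the exact filename (the Q4_K_M quant shown here) is an assumption, so check the repo's file list for the quantization you actually want:

```python
# Sketch: download a single GGUF file from the repo above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="unsloth/Phi-4-mini-instruct-GGUF",
    filename="Phi-4-mini-instruct-Q4_K_M.gguf",  # assumed name; verify on the repo page
)
print(path)  # local path you can pass to llama.cpp, e.g. llama-cli -m <path>
```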

The rest of the versions, including 16-bit, are also on Hugging Face.

And the dynamic 4-bit bitsandbytes version is at https://huggingface.co/unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit
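For the 4-bit checkpoint, a rough loading sketch with Unsloth looks like the following - the `max_seq_length` value is just an example, not a recommendation:

```python
# Sketch: load the dynamic 4-bit bitsandbytes checkpoint via Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,   # example value only
    load_in_4bit=True,     # uses the pre-quantized bitsandbytes weights
)
```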

For those who missed it, there were tokenization problems in the larger Phi-4 14B as well, which we fixed a while back - Microsoft incorporated our fixes 2 weeks ago.

Thank you! :)