DeepSeek R1 Distill 1.5B abliterated weights arrive on HuggingFace
Erokafella released an uncensored fine-tune of DeepSeek's 1.5B reasoning model, stripping safety filters while preserving MIT licensing and chain-of-thought inference.
DeepSeek-R1-Distill-Qwen-1.5B-uncensored, released this week on HuggingFace by creator Erokafella, is an abliterated fine-tune of DeepSeek's 1.5B parameter distilled reasoning model. The variant strips safety guardrails from the base checkpoint while retaining its reasoning capabilities and MIT license.
The base model is one of DeepSeek's smaller distillations of its R1 reasoning architecture, designed to run on consumer hardware while preserving chain-of-thought inference patterns. Abliteration, a technique that locates the refusal behavior in a model's weights and ablates it directly rather than retraining, has become a common path for practitioners who need unrestricted local inference. At 1.5 billion parameters, the model fits in roughly 3–4 GB of VRAM in fp16, making it accessible on mid-range GPUs or Apple Silicon Macs with unified memory.
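If the release follows the usual HuggingFace layout, loading it with transformers is straightforward. The sketch below is illustrative only: the repo id is an assumption pieced together from the creator and model names reported here, and the footprint comment reflects the fp16 estimate above.

```python
# Minimal sketch of loading the abliterated 1.5B checkpoint locally in fp16.
# The repo id is a guess based on the reported creator and model name;
# substitute the actual HuggingFace path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Erokafella/DeepSeek-R1-Distill-Qwen-1.5B-uncensored"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # ~2 bytes per parameter, about 3 GB of weights
    device_map="auto",          # spreads layers across GPU / Apple Silicon / CPU as available
)

# R1-style distills emit their chain of thought before the final answer,
# so leave generous headroom in max_new_tokens.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "How many primes are there between 10 and 30?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```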
Format and licensing
Weights are distributed in safetensors format, the standard for PyTorch checkpoints on HuggingFace. The model card lists the base as deepseek-ai/deepseek-r1-distill-qwen-1.5b, confirming it's a direct fine-tune rather than a merge or quantization. The uncensored variant retains the original MIT license, permitting commercial use without additional restrictions.
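The base-model and license fields can be checked programmatically from the repository metadata. A small sketch using huggingface_hub, with the same assumed repo id as above; field names in the card may differ from what is shown here.

```python
# Hedged sketch: inspecting the model card metadata and shipped files
# with huggingface_hub. Repo id is an assumption, as above.
from huggingface_hub import model_info

info = model_info("Erokafella/DeepSeek-R1-Distill-Qwen-1.5B-uncensored")

print(info.card_data.base_model)  # expected per the card: deepseek-ai/deepseek-r1-distill-qwen-1.5b
print(info.card_data.license)     # expected: mit

# Confirm the weights ship as safetensors shards rather than pickle-based .bin files.
print([f.rfilename for f in info.siblings if f.rfilename.endswith(".safetensors")])
```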
