Qwen3-VL-32B abliterated weights arrive on HuggingFace
An abliterated 32-billion-parameter Qwen3-VL multimodal model drops on HuggingFace with safety filters removed, targeting uncensored image-text workflows.
Qwen3-VL-32B-Instruct-uncensored-heretic, a 32-billion-parameter multimodal model from gsting, strips the safety tuning from Alibaba's Qwen3-VL-32B-Instruct. The weights landed on HuggingFace on May 11 as a safetensors checkpoint tagged "abliterated," "decensored," and "heretic"—labels that signal removal of the alignment behavior that blocks NSFW or controversial prompts in the original instruction-tuned release. Open-weight multimodal models like Qwen3-VL can process both images and text, making abliterated variants particularly useful for practitioners building vision-language pipelines without content restrictions.
At 32 billion parameters, the model sits in the sweet spot for practitioners with consumer-grade multi-GPU rigs or cloud instances—large enough for nuanced multimodal reasoning, small enough to run without enterprise infrastructure. The checkpoint is distributed in HuggingFace transformers format, so it loads into standard inference stacks such as vLLM or HuggingFace's own Text Generation Inference. Qwen3-VL's architecture supports long-context reasoning and dense visual grounding, capabilities that carry over to the uncensored fork.
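Because the checkpoint ships in transformers format, loading it looks like any other image-text model. A minimal sketch, with the caveat that the repo id, the `AutoModelForImageTextToText` class choice, and the message schema are assumptions here—check the model card for the exact usage:

```python
def build_messages(image_url: str, question: str) -> list:
    """Chat-template message structure for a single image-plus-text turn.

    The {"type": "image", "url": ...} schema is an assumption; some
    processors expect {"type": "image", "image": ...} instead.
    """
    return [{
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": question},
        ],
    }]


def run(repo_id: str = "gsting/Qwen3-VL-32B-Instruct-uncensored-heretic"):
    """Hypothetical inference sketch; repo id is assumed from the article.

    Requires transformers, torch, and accelerate (for device_map="auto");
    imported lazily so the message-building helper stays dependency-free.
    """
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(repo_id)
    model = AutoModelForImageTextToText.from_pretrained(
        repo_id, torch_dtype="auto", device_map="auto"
    )
    messages = build_messages(
        "https://example.com/photo.jpg", "Describe this image."
    )
    inputs = processor.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

At this parameter count, expect the weights alone to occupy roughly 64 GB in 16-bit precision, which is why multi-GPU sharding via `device_map="auto"` or a quantized load is the practical path on consumer hardware.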
Abliteration has become a common technique in the open-weight community for removing safety guardrails from instruction-tuned models. The process typically involves finding a "refusal direction" in activation space—estimated by contrasting the model's internal activations on prompts it refuses against prompts it answers—and then projecting that direction out of the weights so the model can no longer express it. Unlike fine-tuning, which requires additional training runs, abliteration modifies the existing checkpoint directly, preserving the base model's capabilities while eliminating most refusal behavior. The approach has been applied to text-only models like Llama and Mistral for over a year, and multimodal abliterations like this Qwen3-VL fork extend the same methodology to vision-language architectures.
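The core weight edit can be sketched with toy matrices. This is an illustrative implementation of the general technique, not the fork's actual recipe—the function names and the synthetic activation data are assumptions:

```python
import numpy as np


def refusal_direction(refused_acts: np.ndarray, answered_acts: np.ndarray) -> np.ndarray:
    """Estimate the refusal direction as the normalized difference of
    mean activations on refused vs. answered prompts."""
    diff = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)


def ablate(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Orthogonalize a weight matrix against the refusal direction:
    W' = (I - r r^T) W, so no output of W' has a component along r."""
    return W - np.outer(r, r) @ W


# Toy activations standing in for hidden states captured at one layer.
rng = np.random.default_rng(0)
d = 64
refused = rng.normal(size=(32, d)) + 2.0   # shifted cluster of "refusal" states
answered = rng.normal(size=(32, d))
r = refusal_direction(refused, answered)

W = rng.normal(size=(d, d))                # a matrix writing to the residual stream
W_ablated = ablate(W, r)

# Any output of the ablated matrix is orthogonal to the refusal direction:
x = rng.normal(size=d)
print(abs(r @ (W_ablated @ x)))            # ~0, up to floating-point noise
```

In a real abliteration run this projection is applied to every matrix that writes into the residual stream (attention output and MLP down-projections), using a direction estimated from the model's own activations rather than synthetic data.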
