Qwopus3.5-122B abliterated weights strip safety from Qwen 3.5 multimodal model
OpenYourMind released abliterated weights for a 122B-parameter mixture-of-experts multimodal model based on Qwen 3.5, removing safety guardrails from the image-text-to-text pipeline.
OpenYourMind released abliterated weights for Qwopus3.5-122B-A10B on HuggingFace—a 122-billion-parameter mixture-of-experts model that strips safety tuning from Qwen 3.5's multimodal architecture. The weights are available in safetensors format, supporting image-text-to-text workflows without content filtering. The abliteration process ablates the activation directions that drive refusal behavior, leaving the base reasoning and vision capabilities intact.
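As a rough sketch of how directional ablation usually works, assuming the common recipe of estimating a single refusal direction from contrastive prompts and orthogonalizing output projections against it (this illustrates the general technique, not OpenYourMind's exact procedure; all names and sizes below are placeholders):

```python
import torch

def orthogonalize(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove from an output projection the component that writes along refusal_dir."""
    r = refusal_dir / refusal_dir.norm()
    # W' = W - r r^T W : the projection's outputs can no longer move along r
    return weight - torch.outer(r, r) @ weight

# Hypothetical values: the refusal direction is typically estimated as
# mean(activations on refused prompts) - mean(activations on complied prompts).
hidden_size = 4096
refusal_dir = torch.randn(hidden_size)         # placeholder for the estimated direction
w_out = torch.randn(hidden_size, hidden_size)  # one output projection, e.g. an attention o_proj
w_out_abliterated = orthogonalize(w_out, refusal_dir)
```

Because the edit only rewrites existing weight matrices, nothing about the model's architecture or tokenizer changes; the checkpoint remains a drop-in replacement for the original.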
The model uses Qwen 3.5's MoE architecture with 10 billion active parameters per forward pass, routing across the full 122B parameter space. That low active-parameter count keeps per-token compute manageable on consumer hardware, provided memory can hold the full 122B parameters, while preserving the knowledge density of the larger parameter pool. The image-text-to-text pipeline accepts both text and vision inputs, handling tasks from captioning to visual question answering to multimodal chat.
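A minimal sketch of what that workflow could look like, assuming the checkpoint loads through Transformers' image-text-to-text pipeline; the repo id, image URL, and prompt layout are assumptions, not details from the release:

```python
from transformers import pipeline

# Hypothetical repo id; device_map="auto" shards the full 122B parameters
# across available devices even though only ~10B are active per token.
pipe = pipeline(
    "image-text-to-text",
    model="OpenYourMind/Qwopus3.5-122B-A10B",
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/street_scene.jpg"},
            {"type": "text", "text": "Describe what is happening in this photo."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=128)
print(result[0]["generated_text"])
```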
Expert routing and vision grounding
Qwen 3.5 MoE models split computation across expert sub-networks, activating only the most relevant experts for each token. The 122B total parameter count includes all experts; the 10B active figure reflects what actually runs during generation. OpenYourMind's abliteration targets the refusal behavior learned during post-training without retraining the expert routing or the vision encoder, so the model retains Qwen's original multimodal grounding. The safetensors format means the weights load directly into Transformers-compatible runtimes without conversion.
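One way to sanity-check that the expert and vision tensors survived intact is to read the safetensors headers without materializing any weights; the shard filename and tensor-name patterns below are guesses about the checkpoint layout, not confirmed details:

```python
from safetensors import safe_open

# Hypothetical shard name; a 122B checkpoint ships as many such files.
shard = "model-00001-of-00051.safetensors"

with safe_open(shard, framework="pt", device="cpu") as f:
    for name in f.keys():
        # Expert and vision-encoder tensors should still be present and full-sized
        # if abliteration only rewrote existing weights rather than pruning them.
        if "experts" in name or "visual" in name:
            print(name, f.get_slice(name).get_shape())
```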
Practitioners running local multimodal workflows now have an uncensored alternative to safety-tuned vision-language models in the 100B+ class.
