Gemma4-26B uncensored multimodal model debuts on HuggingFace
HauhauCS released Gemma4-26B-A4B-Uncensored-HauhauCS-Balanced, an uncensored multimodal checkpoint supporting image-text-to-text workflows with vision and coding capabilities.
HauhauCS has published Gemma4-26B-A4B-Uncensored-HauhauCS-Balanced on HuggingFace. The 26-billion-parameter model runs image-text-to-text pipelines with vision, coding, and agentic capabilities, and it ships with GGUF quantization support, making local inference on consumer hardware practical, with no server-side safety enforcement in the loop.
The checkpoint builds on Google's Gemma4 architecture using a mixture-of-experts design, in which only a subset of parameters activates per token; by the usual naming convention, the "A4B" suffix indicates roughly 4 billion active parameters out of the 26-billion total. HauhauCS labeled the release "Balanced," suggesting a trade-off between capability and resource footprint, while the uncensored tag indicates no safety tuning or content filtering, positioning it alongside other open-weight models practitioners can run locally. The model supports agentic workflows, in which it chains reasoning steps or tool calls, alongside text and vision generation, with coding-focused fine-tuning evident in its tags. As of May 14, the checkpoint had 15 likes and zero recorded downloads.
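The mixture-of-experts idea above can be illustrated with a toy sketch: a router scores every expert for a given token, but only the top-k experts actually run. This is purely illustrative pseudocode in miniature, not Gemma4's actual routing implementation, and the expert count and k value here are arbitrary.

```python
# Toy illustration of top-k mixture-of-experts routing (not the actual
# Gemma4 code). The router assigns each expert a score for the current
# token; only the k highest-scoring experts are activated, so most of
# the model's parameters sit idle for any single token.
def route_token(router_scores, k=2):
    """Return the sorted indices of the top-k experts chosen for one token."""
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return sorted(ranked[:k])

# Eight hypothetical experts; only two carry this token's computation.
scores = [0.1, 0.7, 0.05, 0.9, 0.2, 0.3, 0.15, 0.4]
active = route_token(scores, k=2)
print(active)  # experts 1 and 3 are activated for this token
```

This is why a "26B-A4B"-style model can have a 26B-parameter footprint on disk while each forward pass touches only a fraction of those weights.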
