Gemma-4 E4B abliterated checkpoint lands on HuggingFace for ComfyUI
A quantized Apache-2.0 checkpoint of the Gemma-4 E4B ultra-uncensored heretic model landed on HuggingFace this week, packaged for ComfyUI single-file workflows.
Bedovyy released a quantized checkpoint of llmfan46's Gemma-4 E4B ultra-uncensored heretic model on HuggingFace under Apache-2.0 on May 12, 2026. The model is tagged for ComfyUI single-file workflows, making it a drop-in option for practitioners running local text generation nodes. The base model is an instruction-tuned variant of Google's Gemma-4 architecture with its safety behavior removed; "heretic" in the name signals the abliteration, a technique that suppresses refusals by ablating the associated directions in the model's activations rather than by retraining. The quantized weights reduce memory footprint for consumer hardware without requiring a full GGUF conversion.
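The memory savings from quantization follow directly from bits per parameter. A rough back-of-the-envelope sketch, assuming an illustrative 4-billion-parameter count (the actual E4B checkpoint size depends on its architecture and quantization scheme):

```python
# Rough VRAM estimates for weight storage of a ~4B-parameter model
# at common precisions. The 4e9 parameter count is an assumption for
# illustration, not the exact size of the E4B checkpoint.
PARAMS = 4e9

def weight_gib(bits_per_param: float) -> float:
    """Approximate weight storage in GiB at a given precision."""
    return PARAMS * bits_per_param / 8 / 2**30

for label, bits in [("fp16/bf16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{label:>9}: {weight_gib(bits):.1f} GiB")
```

At fp16 the weights alone would exceed the VRAM of many consumer GPUs, while 8-bit or 4-bit quantization brings them comfortably within range; activations and KV cache add further overhead on top of these figures.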
Quantized checkpoints have become standard in the open-weight community, bridging full-precision releases—often too large for consumer GPUs—and heavily compressed GGUF files that require specialized runtimes. A ComfyUI-tagged single-file checkpoint slots into existing workflows without rewriting node logic, which matters when practitioners chain text generation with image synthesis or multimodal pipelines. Apache-2.0 licensing permits commercial use and redistribution. The model showed zero downloads at publication.
