Arsenic-Shahrazad-12B-v4 GGUF: 12B merge trained on 2024–2026 data, no safety filtering
mradermacher released quantized GGUF weights for Arsenic-Shahrazad-12B-v4, a 12-billion-parameter merge trained on post-2024 datasets and tagged not-for-all-audiences.
Arsenic-Shahrazad-12B-v4 is a 12-billion-parameter language model merge that arrived on HuggingFace on May 16 in GGUF quantizations. The model combines instruction-tuning and conversational data from lambent's post-cutoff-2024-2026-sft and post-cutoff-2024-2026-bundles datasets, covering events and terminology from 2024 through early 2026, a window that captures contemporary slang, product names, and geopolitical context absent from pre-2024 training runs. The not-for-all-audiences tag indicates the weights carry no built-in safety filtering, making them suitable for local deployment where content restrictions are undesirable.
GGUF quantizations allow the model to run on consumer hardware with reduced memory footprints, a standard distribution choice for community merges targeting CPU and lower-VRAM GPU setups. The format has become the de facto standard for local inference toolchains like llama.cpp, Ollama, and LM Studio, which prioritize running models on machines without dedicated AI accelerators. By shipping quantized weights directly, mradermacher spares users the otherwise-required round trip of downloading the full-precision checkpoint (roughly 24 GB at 16-bit precision for a 12B model), converting it to GGUF, and re-quantizing it before the model can run outside of PyTorch. The repository credits mergekit as the merge toolchain, the standard open-source utility for blending multiple fine-tuned checkpoints into a single model without retraining from scratch.
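For readers who want to try the release locally, the sketch below loads one of the quants through the llama-cpp-python bindings. The repository id and quant filename are illustrative assumptions rather than confirmed paths from the release, so check the repository's file listing and substitute whichever quant fits your hardware.

```python
# Minimal local-inference sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python huggingface-hub). The repo id and GGUF
# filename below are assumptions for illustration; verify both against
# the actual mradermacher file listing before downloading.
from llama_cpp import Llama

# from_pretrained fetches a matching GGUF file from the Hugging Face Hub,
# caches it locally, then builds a llama.cpp context around it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Arsenic-Shahrazad-12B-v4-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",  # glob for a mid-size quant; pick one that fits your RAM/VRAM
    n_ctx=4096,               # context window to allocate
    n_gpu_layers=-1,          # offload all layers to a GPU if present; 0 forces CPU-only
)

# Most modern GGUF files embed a chat template, so the chat-completion API
# can be used directly instead of hand-formatting the prompt.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What changed in open-weight licensing during 2025?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

As a rough guide, a Q4_K_M quant of a 12B model comes to about 7-8 GB on disk, which is what puts CPU-only and single-consumer-GPU machines within reach; smaller quants shrink the footprint further at some cost in output quality.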
