Qwen3-Space.Agent_DASD-Uncensored-4B: compact 4B merge for creative writing
A 4-billion-parameter uncensored Qwen3 merge tuned for creative writing and long chain-of-thought reasoning landed on Hugging Face this week.
GODsStrongestSoldier released Qwen3-Space.Agent_DASD-Uncensored-4B, which combines Qwen3 base weights via spherical linear interpolation (SLERP). The checkpoint ships in safetensors format and targets creative writing, reasoning, and long chain-of-thought tasks, offering a compact option for practitioners running unrestricted local inference on consumer-grade hardware.
Qwen3 is Alibaba's latest open-weight language model family, with dense variants ranging from 0.6B to 32B parameters alongside larger mixture-of-experts models. The base models ship with safety tuning, making uncensored derivatives a recurring interest among practitioners who need unrestricted local generation for creative fiction, role-play scenarios, or research into model behavior without guardrails. At 4 billion parameters, this merge sits in a sweet spot for a single mid-range GPU: small enough to fit in memory, large enough to handle multi-turn dialogue and extended reasoning chains.
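The memory claim is easy to sanity-check with back-of-envelope arithmetic. This sketch estimates the VRAM needed just for the weights of a 4B-parameter model at common precisions (it ignores KV cache and activations, so real-world usage will be higher):

```python
# Rough weight-only memory footprint for a 4B-parameter model.
# Excludes KV cache, activations, and framework overhead.
PARAMS = 4_000_000_000
BYTES_PER_PARAM = {"fp16/bf16": 2, "int8": 1, "int4": 0.5}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30
    print(f"{fmt}: ~{gib:.1f} GiB")
```

At fp16 the weights alone come to roughly 7.5 GiB, which is why a 4B model fits comfortably on an 8-12 GB consumer GPU, with quantized variants leaving even more headroom.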
SLERP interpolates along the great-circle arc between two weight vectors, preserving geometric structure in parameter space that a plain linear average can flatten. The "Space.Agent" and "DASD" components in the model name suggest a lineage of prior merges or fine-tunes, though the card does not spell out the ancestry; community merge authors often build on earlier work in a chain, with each iteration targeting a specific capability gap. The creative-writing and long-CoT tags imply the merge was optimized for tasks requiring sustained narrative coherence or step-by-step problem decomposition, both areas where smaller models have historically struggled. The release went live May 13, 2025; no benchmark scores or sample outputs are included in the model card at publication time.
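For readers unfamiliar with the technique, SLERP applied per-tensor can be sketched in a few lines of NumPy. This is a toy illustration of the math, not the actual merge recipe or tooling used for this release:

```python
import numpy as np

def slerp(w_a, w_b, t):
    """Spherical linear interpolation between two flattened weight tensors.

    Unlike a plain linear blend, SLERP follows the great-circle arc
    between the two vectors, preserving their angular relationship.
    """
    a = w_a / np.linalg.norm(w_a)
    b = w_b / np.linalg.norm(w_b)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)           # angle between the two weight vectors
    if theta < 1e-6:                 # nearly parallel: fall back to lerp
        return (1 - t) * w_a + t * w_b
    sin_theta = np.sin(theta)
    return (np.sin((1 - t) * theta) / sin_theta) * w_a \
         + (np.sin(t * theta) / sin_theta) * w_b

# Blend two toy "layers" halfway between the parent models.
layer_a = np.array([1.0, 0.0, 0.0])
layer_b = np.array([0.0, 1.0, 0.0])
merged = slerp(layer_a, layer_b, t=0.5)
```

For unit vectors, the interpolated result stays on the unit sphere, whereas a straight average of orthogonal unit vectors would have norm 1/√2; in a real merge this is applied tensor-by-tensor across the two checkpoints, often with a per-layer interpolation schedule.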
