Qwen3-30B-A3B-NSFW-JP: uncensored Japanese fine-tune for local inference
Zrgei released an uncensored Japanese fine-tune of Alibaba's Qwen3-30B-A3B base model on HuggingFace, targeting NSFW conversational use cases with no safety filtering.
Zrgei released Qwen3-30B-A3B-NSFW-JP on HuggingFace on May 15, an uncensored Japanese fine-tune of Alibaba's Qwen3-30B-A3B base model. The checkpoint is tagged for conversational text generation and explicitly targets NSFW use cases with no safety filtering. It ships in safetensors format and loads through the Hugging Face transformers pipeline API.
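A minimal loading sketch via the transformers pipeline might look as follows. The repo id is an assumption pieced together from the author handle and model name, not confirmed by the model card, and the generation call is illustrative only:

```python
# Hypothetical loading sketch. The repo id below is assumed from the release
# name and author handle; it is not confirmed by the model card.
REPO_ID = "Zrgei/Qwen3-30B-A3B-NSFW-JP"

def build_chat_pipeline(repo_id: str = REPO_ID):
    """Build a text-generation pipeline for the safetensors checkpoint."""
    from transformers import pipeline  # deferred import: needs transformers + torch
    return pipeline(
        "text-generation",
        model=repo_id,
        torch_dtype="auto",   # read the dtype from the safetensors metadata
        device_map="auto",    # shard the MoE weights across available GPUs
    )

# Usage (downloads the full ~60GB of 16-bit weights on first run):
# chat = build_chat_pipeline()
# out = chat([{"role": "user", "content": "こんにちは！"}],  # "Hello!" in Japanese
#            max_new_tokens=64)
# print(out[0]["generated_text"])
```

The chat-messages input format shown in the usage comment relies on the pipeline applying the model's chat template, which recent transformers versions do for conversational checkpoints.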
The base model, Qwen3-30B-A3B, is a 30-billion-parameter mixture-of-experts architecture from Alibaba's Qwen team, with roughly 3 billion parameters active per token (the "A3B" in the name). Zrgei's fine-tune adapts the weights for Japanese-language prompts and removes the safety tuning present in the original release. The model card marks the checkpoint as not-for-all-audiences, signaling unrestricted output.
Local deployment
The weights are freely downloadable for local inference. The base Qwen3-30B-A3B typically ships under Alibaba's Tongyi Qianwen license, which permits research and commercial use with attribution. Practitioners running the fine-tune locally need hardware that holds all 30 billion parameters in memory, even though only about 3 billion are active per token: roughly 60GB at 16-bit precision, which exceeds a single 48GB card. In practice that means a multi-GPU setup for unquantized weights, a 48GB or dual-24GB configuration with 8-bit quantization (~30GB), or 4-bit variants (~15GB) for consumer cards.
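The sizing figures above are simple arithmetic on the parameter count. A sketch of the estimate, which covers weights only and ignores KV cache, activations, and runtime overhead:

```python
# Back-of-envelope VRAM estimate for a 30B-parameter checkpoint.
# Weights only: real deployments also need room for KV cache and activations.

def weight_gb(n_params: int, bytes_per_param: float) -> float:
    """Approximate weight memory in GB at a given precision."""
    return n_params * bytes_per_param / 1e9

N_PARAMS = 30_000_000_000  # from the "30B" in the model name

for label, bpp in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{weight_gb(N_PARAMS, bpp):.0f} GB")
# fp16/bf16: ~60 GB · int8: ~30 GB · int4: ~15 GB
```

Note that MoE sparsity reduces compute per token, not memory: every expert's weights must be resident, so the 30B figure, not the 3B active count, drives the VRAM requirement.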
No benchmark scores or sample outputs appear in the model card. Japanese-language NSFW fine-tunes remain a niche segment of the open-weight ecosystem, with most uncensored conversational models targeting English or Chinese prompts.
