EnceladusHyperStock-24B: Mistral merge debuts on HuggingFace with unrestricted weights
ShyliaSafetensors released a 24-billion-parameter Mistral merge built with Mergekit, flagged as unrestricted and available on HuggingFace.
EnceladusHyperStock-24B, a 24-billion-parameter text-generation model from ShyliaSafetensors, landed on HuggingFace on May 11, 2026. Built with Mergekit and tagged as a "model-stock" merge, the checkpoint combines multiple Mistral-architecture weights into a single safetensors artifact. The model card carries the not-for-all-audiences tag, signaling that the weights ship without safety filtering, a property that matters for practitioners running local inference without API guardrails.
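Mergekit merges are driven by a YAML recipe, and the one behind EnceladusHyperStock is unpublished. The sketch below is a hypothetical model-stock configuration, not the actual recipe: the base model and both parent models are placeholders chosen only to illustrate the shape of such a merge.

```python
import subprocess
from pathlib import Path

# Hypothetical model_stock recipe. The real EnceladusHyperStock config is
# unpublished; every model id below is a placeholder, not a known ingredient.
config = """\
merge_method: model_stock
base_model: mistralai/Mistral-Small-24B-Base-2501  # assumed 24B Mistral base
models:
  - model: example-org/mistral-24b-finetune-a  # placeholder parent fine-tune
  - model: example-org/mistral-24b-finetune-b  # placeholder parent fine-tune
dtype: bfloat16
"""
Path("model_stock.yml").write_text(config)

# mergekit's CLI entrypoint; writes a merged safetensors checkpoint to ./merged
subprocess.run(["mergekit-yaml", "model_stock.yml", "./merged"], check=True)
```

Model-stock merging averages several fine-tunes toward a shared base, which is why the method needs both a base_model and a list of parents; publishing that list is exactly the transparency the card currently lacks.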
The 24B parameter count sits between Mistral 7B and larger 70B+ open-weight models, a middle ground that fits on a single consumer GPU once quantized while retaining enough capacity for instruction-following and reasoning tasks. Mergekit-based checkpoints typically blend instruction-tuned or domain-specific fine-tunes to inherit strengths from multiple parent models, though the exact recipe remains unpublished here.
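To make the consumer-GPU claim concrete: at 4-bit quantization a 24B checkpoint occupies roughly 13-14 GB of VRAM, within reach of a 16 GB or 24 GB card, versus roughly 48 GB at full bf16. A minimal loading sketch with transformers and bitsandbytes follows; the repo id is an assumption inferred from the card's owner/model naming, so verify it before downloading.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "ShyliaSafetensors/EnceladusHyperStock-24B"  # assumed repo id; verify first

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~13 GB of weights for a 24B model
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb,
    device_map="auto",  # place layers on the available GPU(s)
)

prompt = "Explain what a model-stock merge does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```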
At launch, the repository shows zero downloads and zero likes: a fresh upload with no public traction yet. The model card omits benchmark numbers, training details, and the merge configuration itself, leaving users to test the checkpoint directly. Without published eval data or parent-model transparency, it is unclear whether EnceladusHyperStock lives up to its name or simply repackages existing Mistral merges. The next signal to watch is whether ShyliaSafetensors updates the card with merge weights, parent models, and at least one benchmark pass (MMLU, HellaSwag, or perplexity) before practitioners commit VRAM to a 24B download.
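Absent published evals, a quick perplexity pass is the cheapest sanity check practitioners can run themselves. A minimal sketch, assuming the model and tokenizer are loaded as above and using any held-out text file of your choosing (heldout.txt here is a stand-in, not a standard dataset):

```python
import torch

def perplexity(model, tokenizer, text: str, max_len: int = 2048) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_len)
    input_ids = enc.input_ids.to(model.device)
    with torch.no_grad():
        # labels=input_ids makes the model return mean next-token cross-entropy
        loss = model(input_ids, labels=input_ids).loss
    return torch.exp(loss).item()

sample = open("heldout.txt").read()  # any text the model has not seen
print(f"perplexity: {perplexity(model, tokenizer, sample):.2f}")
```

A score far off from comparable Mistral-derived merges on the same text would be the quickest sign to skip the download.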
