Anima model struggles to merge custom LoRAs in multi-character scenes
Users report Anima's photorealism breaks down when combining multiple character LoRAs in a single image, forcing a choice between characters despite native multi-character support.
Anima, the photorealistic fine-tune for Stable Diffusion, hits a wall when users try to combine multiple custom character LoRAs in a single scene. The model handles native characters cleanly in multi-subject compositions, but adding two or more trained LoRAs produces incoherent outputs—typically rendering only one character while dropping or distorting the other.
The issue surfaced in ComfyUI workflows this week, where practitioners testing character crossover scenes found the model defaulting to whichever LoRA weights dominate the prompt. A user working with character LoRAs that Anima doesn't natively recognize described the results as "a mess"—the model renders either one character or the other, but not both in a coherent image. Attempts to balance LoRA strength or adjust regional conditioning haven't yielded consistent results.
The breakdown appears tied to how Anima's base weights encode character identity. Native subjects baked into the checkpoint coexist without conflict, so multi-character scenes render cleanly when every subject comes from the model's training set. But externally trained adapters (LoRAs built on top of other Stable Diffusion checkpoints or custom datasets) compete for the same latent space when loaded together. Anima's photorealism tuning may amplify the conflict: higher-fidelity outputs leave less room for the kind of feature blending that lower-fidelity models tolerate.
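The competition is easy to sketch in a toy NumPy example: a LoRA stores a low-rank update that is added directly to a base weight matrix, so two LoRAs loaded together sum into the same weights and perturb each other's outputs. Every name and dimension below is illustrative, not Anima's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # toy hidden dim and LoRA rank

W = rng.standard_normal((d, d)) * 0.02  # stand-in base weight matrix

def lora_delta(rng, d, r, scale=1.0):
    # A LoRA is a low-rank update: delta_W = scale * B @ A
    A = rng.standard_normal((r, d)) * 0.1
    B = rng.standard_normal((d, r)) * 0.1
    return scale * B @ A

dW1 = lora_delta(rng, d, r)  # hypothetical "character 1" LoRA
dW2 = lora_delta(rng, d, r)  # hypothetical "character 2" LoRA

x = rng.standard_normal(d)       # a probe activation

y1 = (W + dW1) @ x               # character 1's LoRA alone
y_both = (W + dW1 + dW2) @ x     # both LoRAs merged

# The second LoRA shifts character 1's features too: the deltas add
# in the same weight matrix, so neither identity arrives intact.
drift = np.linalg.norm(y_both - y1) / np.linalg.norm(y1 - W @ x)
```

Lowering a LoRA's `scale` shrinks its delta but also weakens its own character, which matches the reported trade-off: strength balancing dials the conflict up or down rather than removing it.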
Multi-LoRA coherence has been a known challenge across Stable Diffusion fine-tunes since the LoRA format gained traction in late 2023. Pony Diffusion and SDXL-based checkpoints handle it with varying success, often requiring careful prompt weighting or regional masking to keep characters distinct. No reliable workaround has emerged in community testing so far.
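The regional-masking idea used by other checkpoints can be sketched in a few lines: each character's contribution is gated by a spatial mask so the two sets of features never overlap in the latent grid. The arrays below are placeholders for real latents and LoRA outputs, not any particular model's tensors.

```python
import numpy as np

# Toy 8x8 latent grid; feat_a / feat_b stand in for the features each
# character LoRA would contribute at every spatial position.
feat_a = np.full((8, 8), 1.0)    # hypothetical "character A" features
feat_b = np.full((8, 8), -1.0)   # hypothetical "character B" features

mask_a = np.zeros((8, 8))
mask_a[:, :4] = 1.0              # character A owns the left half
mask_b = 1.0 - mask_a            # character B owns the right half

# Regional masking confines each LoRA's contribution to its own region,
# so the two updates never compete for the same spatial positions.
latent = mask_a * feat_a + mask_b * feat_b
```

This only separates the characters spatially; it does nothing about conflicts in shared layers such as text conditioning, which may be why masking alone has produced mixed results in the Anima reports.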
