AsymFLUX.2-klein-9B: open-weight pixel-space model with asymmetric flow training
Lakonik released AsymFLUX.2-klein-9B, a 9-billion-parameter open-weight text-to-image model fine-tuned from Black Forest Labs' FLUX.2-klein-base. The release, which pairs the checkpoint with an arXiv preprint and implementation code, introduces the AsymFlow training method: separate forward and reverse paths trained with different parameterizations, rather than a single shared velocity field.
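To ground the idea, here is a minimal sketch of the standard flow-matching setup that AsymFlow modifies. This is not the paper's method: it uses a straight (rectified-flow-style) interpolation path and an oracle per-sample velocity in place of a learned network, with comments marking where AsymFlow's separate forward and reverse parameterizations would plug in. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate(x0, x1, t):
    # straight-line path from noise x0 to data x1; during training a
    # network v_theta(x_t, t) would be queried at these points
    return (1.0 - t) * x0 + t * x1

def velocity_target(x0, x1):
    # along the straight path, dx_t/dt = x1 - x0 (constant in t)
    return x1 - x0

# toy 1-D setup: "data" is a point mass at 2.0, "noise" is standard normal
x1 = np.full(1000, 2.0)
x0 = rng.standard_normal(1000)

# Euler-integrate the reverse (noise -> data) ODE. The oracle velocity
# stands in for a learned field; standard flow matching trains one shared
# v_theta for this, whereas AsymFlow (per the paper's description) would
# train distinct parameterizations for the forward and reverse directions.
steps = 50
x = x0.copy()
for _ in range(steps):
    x = x + velocity_target(x0, x1) / steps

print(float(np.abs(x - x1).max()))  # ~0: the straight path lands on the data
```

With an oracle velocity the straight path is exact; the interesting question the paper addresses is what happens when a learned field must serve both directions, versus giving each direction its own parameterization.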
Unlike latent diffusion models, AsymFLUX.2-klein operates in pixel space, generating images directly at native resolution with no VAE decode step. The trade-off is higher VRAM consumption during inference, but the model skips the compression-decompression cycle entirely.

The accompanying paper argues that asymmetry between the forward and reverse processes reduces the mismatch between training and inference dynamics in flow-based models. Traditional flow matching learns a single velocity field mapping noise to data; AsymFlow trains distinct paths, which the authors claim yields sharper outputs and faster convergence.
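The VRAM trade-off can be put in rough numbers. The downsampling factor and latent channel count below are common VAE defaults assumed for comparison, not values from the AsymFLUX model card:

```python
# Back-of-envelope activation size per denoising step:
# pixel space vs a typical VAE latent space.
H = W = 1024          # target resolution
C = 3                 # RGB channels in pixel space
latent_down = 8       # assumed VAE spatial downsampling factor
latent_ch = 16        # assumed latent channel count

pixel_elems = H * W * C
latent_elems = (H // latent_down) * (W // latent_down) * latent_ch
ratio = pixel_elems / latent_elems
print(ratio)  # 12.0: the pixel-space model pushes ~12x more values per step
```

Under these assumptions, every denoising step touches roughly an order of magnitude more activations, which is where the extra VRAM goes; in exchange, there is no lossy encode at training time and no decode at inference time.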
Training and deployment
The full training recipe, model card, and inference code are available on Hugging Face and GitHub. Because the weights are open and the model runs locally, it can be fine-tuned or used without the safety restrictions of hosted APIs. The checkpoint integrates with standard diffusion inference pipelines and is immediately usable for experimentation or downstream training.
