Lightricks open-sources LipDub IC-LoRA for video dialogue replacement on LTX-2.3
LipDub is an open-source IC-LoRA adapter for LTX-2.3 that regenerates speech and lip motion in a single pass, preserving speaker appearance and vocal identity while replacing dialogue.
Lightricks released LipDub this week. Rather than synthesizing new audio and grafting it onto existing footage, the adapter regenerates speech and lip motion together in one generative pass, keeping the speaker's appearance, vocal identity, and delivery intact while the spoken words change.
The beta ships with 1080p output, support for clips up to eight seconds, and single-speaker workflows in English, French, Spanish, German, and Russian. Users feed the adapter a source video and a text prompt with replacement dialogue; LipDub preserves everything except the lip region. The release includes a ComfyUI workflow, a Python pipeline, and weights on HuggingFace.
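Before running a clip through the Python pipeline, it can be worth validating it against the beta's stated limits (up to eight seconds, 1080p, single speaker, five languages). A minimal sketch of such a pre-flight check; the function and constant names here are illustrative, not part of the actual LipDub API:

```python
# Hypothetical pre-flight check against the LipDub beta limits described
# above. Names are illustrative, not part of the released pipeline.

SUPPORTED_LANGUAGES = {"en", "fr", "es", "de", "ru"}
MAX_CLIP_SECONDS = 8.0
MAX_WIDTH, MAX_HEIGHT = 1920, 1080  # 1080p output ceiling

def check_clip(duration_s: float, width: int, height: int,
               language: str, num_speakers: int) -> list[str]:
    """Return a list of problems; an empty list means the clip fits the beta limits."""
    problems = []
    if duration_s > MAX_CLIP_SECONDS:
        problems.append(f"clip is {duration_s:.1f}s; beta supports up to {MAX_CLIP_SECONDS:.0f}s")
    if width > MAX_WIDTH or height > MAX_HEIGHT:
        problems.append(f"{width}x{height} exceeds the 1080p output ceiling")
    if language not in SUPPORTED_LANGUAGES:
        problems.append(f"language '{language}' is not in the beta set {sorted(SUPPORTED_LANGUAGES)}")
    if num_speakers != 1:
        problems.append("beta handles single-speaker clips only")
    return problems
```

A clip that passes returns an empty list; anything over the limits gets a human-readable explanation per violation.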
Lightricks is positioning this as an early community release ahead of a planned API launch. The adapter is grounded in "Video Dubbing via Joint Audio-Visual Diffusion," a research paper from Lightricks and Tel Aviv University that argues joint audio-visual generation outperforms modular pipelines in which audio and video are synthesized separately and then aligned.
Practical use cases include dubbing into another language, rephrasing dialogue in the original language, and talking-head generation workflows. The model is open-weight and runs locally, so practitioners can fine-tune or extend it without server-side restrictions. Lightricks has published the model card on HuggingFace and documentation on the LTX site.
