Cloud GPU rentals emerge as affordable path for LTX 2.3 video work post-Sora
Following Sora's API closure, creators are turning to hourly cloud GPU rentals to run open-weight video models like LTX 2.3, finding the economics favor temporary access over local hardware purchases.
The shutdown of Sora's public API has pushed video creators toward cloud GPU rentals for open-weight video synthesis models. LTX 2.3, an open-weight video diffusion model, is emerging as a practical alternative for users who can't justify buying local hardware or paying closed-API pricing.
Cloud GPU providers like RunPod, Vast.ai, and Lambda Labs rent H100 and A100 instances by the hour, typically $1–3 per hour for single-GPU inference workloads. LTX 2.3 runs on cards with 24GB of VRAM, putting it within reach of mid-tier rentals. Users report stable generation times of 2–4 minutes per 5-second clip at 720p on an A6000, making hourly rentals viable for batch work.
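Those figures imply a per-clip cost in the cents range. A minimal sketch using mid-range numbers from the reported ranges ($2/hour, ~3 minutes per clip); the function name and exact inputs are illustrative assumptions, not benchmarks:

```python
# Estimate the rental cost of one generated clip.
# Inputs are assumptions drawn from reported ranges, not measured benchmarks.
def cost_per_clip(hourly_rate_usd: float, minutes_per_clip: float) -> float:
    """Rental cost of a single clip at a given hourly GPU rate."""
    return hourly_rate_usd * (minutes_per_clip / 60)

# $2/hour, 3 minutes per 5-second 720p clip on an A6000:
print(f"${cost_per_clip(2.0, 3.0):.2f} per clip")  # $0.10 per clip
```

At roughly ten cents of GPU time per clip, the dominant cost for light users ends up being the per-session setup overhead rather than generation itself.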
Rental versus ownership
The math favors rentals for sporadic use. A local RTX 4090 costs $1,600–2,000 upfront; at $2 per hour cloud rates, that's 800–1,000 hours of compute before break-even. For creators producing a few videos per week, rentals stay cheaper for months. The trade-off is setup overhead: each session requires reinstalling dependencies and transferring checkpoints, adding 10–15 minutes per job.
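The break-even arithmetic above can be sketched directly, including the setup overhead billed as rental time. The helper names and the 12.5-minute midpoint for setup are assumptions for illustration; the dollar figures come from the text:

```python
# Break-even between buying a GPU outright and renting by the hour.
def break_even_hours(purchase_usd: float, rate_usd_per_hr: float) -> float:
    """Rental hours at which cumulative rental cost equals the purchase price."""
    return purchase_usd / rate_usd_per_hr

def session_cost(rate_usd_per_hr: float, work_hours: float,
                 setup_minutes: float = 12.5) -> float:
    """Cost of one rental session, billing setup time at the GPU rate.

    setup_minutes defaults to the midpoint of the reported 10-15 min overhead.
    """
    return rate_usd_per_hr * (work_hours + setup_minutes / 60)

print(break_even_hours(1600, 2.0))          # 800.0 hours (low-end card price)
print(break_even_hours(2000, 2.0))          # 1000.0 hours (high-end card price)
print(round(session_cost(2.0, 1.0), 2))     # 2.42 for a one-hour job
```

The session-cost figure shows why overhead matters: on a one-hour job, setup inflates the bill by roughly 20 percent, which is why rentals suit batch work better than one-off clips.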
Open-weight models like LTX 2.3 and Wan Video carry no per-generation API fees, unlike closed services that charge per second of output. That cost structure makes them natural fits for rental workflows, where users pay only for active GPU time and avoid both capital expense and recurring subscription tiers.
