Instructions to use Lightricks/LTX-2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Lightricks/LTX-2 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
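If the full-precision pipeline above does not fit in your GPU's memory, Diffusers provides generic memory-saving switches that may help. A minimal sketch, assuming the standard Diffusers offloading and VAE-tiling APIs apply to this pipeline (not verified against LTX-2 specifically):

```python
import torch
from diffusers import DiffusionPipeline

# Load weights in bfloat16 to roughly halve memory versus float32.
pipe = DiffusionPipeline.from_pretrained("Lightricks/LTX-2", dtype=torch.bfloat16)

# Keep only the sub-model currently running on the GPU; park the rest in CPU RAM.
# Slower per step, but much lower peak VRAM.
pipe.enable_model_cpu_offload()

# If this pipeline's VAE supports tiled decoding, cap the VAE's memory spike
# by decoding the video latents in tiles.
if hasattr(pipe, "vae") and hasattr(pipe.vae, "enable_tiling"):
    pipe.vae.enable_tiling()
```

For even tighter memory budgets, `pipe.enable_sequential_cpu_offload()` trades more speed for a smaller footprint; quantized checkpoints (fp8, GGUF) mentioned in the comments below are a separate, community-provided route.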
Model for rich people.
The system requirements are unacceptable. You haven't made any optimization efforts after all these months. This team thinks we're in Dubai or something. My PC is sufficient for most AI models, but obviously not for LTX. I can't even test low resolution. Immediate failure.
I run this model (fp8) on my PC: Ryzen 9 7900X3D, 64 GB DDR5, RTX 4070 Ti. It takes about 180s to create a 5s 720p@24 clip. And ... I'm not in Dubai))
> The System Requirements are unacceptable. You haven't made any optimization efforts, after all these months. This team thinks we're in Dubai or something. My pc is sufficient for most AI models, but obviously not for LTX. I can't even test low resolution. Immediate failure.
You need better hardware for this sort of model. I recommend a real AI-capable system, like an AMD Strix Halo system for example (if you don't want to take two minutes to look for quantized models that fit lesser builds).
I run it on a rusty old 3090 with no problems at all; it works great ;-)
And super grateful to LTX team for releasing this model. (PS, I'm also not in Dubai;-)
You can also try Kijai's nodes, which might run a little easier on low-end computers and even support GGUF:
https://huggingface.co/Kijai/LTXV2_comfy