LTXVideo 13B AI video generation
54 comments
· May 10, 2025
qwertox
It's surprising that you can't tell who made this page. When I saw it I wanted to know who was behind it, so I looked and found "© 2025 Lightricks. All rights reserved." at the end of the page; from there Google led me to lightricks.com. I had my bets on a Chinese company, but I was wrong.
I am surprised that it can run on consumer hardware.
yorwba
There's a bit of a trend of impersonating newly launched AI products. Just check out the submitter's other projects: https://news.ycombinator.com/submitted?id=zoudong376
I assume it's for SEO, supply-chain attacks, or overcharging for subscriptions.
liuliu
Hi! Draw Things should be able to add support in the next two weeks, after we get the video feature a bit more polished with the existing video models (Wan 2.1, Hunyuan, etc.).
ronreiter
I work with the Lightricks team.
This is not an official page created by Lightricks, and we do not know who the owner of this page is or why they created it.
xg15
what's going on here?
simonw
This is so weird. The domain has whois information withheld and the site is hosted on Vercel.
Best hint is the submission history of https://news.ycombinator.com/submitted?id=zoudong376 which shows similar unofficial sites for other projects.
snagadooker
I'm not certain what that website is or whether it's affiliated with the model developers.
For more information about the model, refer to these sources:
Model repository: https://github.com/Lightricks/LTX-Video
ComfyUI integration: https://github.com/Lightricks/ComfyUI-LTXVideo
Early community LoRAs: https://huggingface.co/Lightricks/LTXV-LoRAs
Banadoco Discord server, an excellent place for discussing LTXV and other open models (Wan/Hunyuan):
https://discord.com/channels/1076117621407223829/13693260067...
sigmoid10
OP seems to be making tons of these "fan" pages for AI tools according to his HN submission history. It's also the same design every time. Smells fishy.
coldcode
None of the browsers I tried on my Mac shows any of the videos; you only see the top animation.
The console also shows: cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation
There are a couple of JS errors, which I presume keep the videos from appearing.
jsheard
That's the least of the problems with how they've optimized their assets: there are about 250MB of animated GIFs on the Hugging Face page (actual 1989-vintage GIFs, not modern videos pretending to be GIFs). AI people apparently just can't get enough of wasting bandwidth. At least this time it's another AI company footing the bill for all the expensive AWS egress they're burning through for no reason.
_345
this is the tech equivalent of being upset that someone forgot to also recycle the aluminum cap that came with their glass bottle
dingdingdang
Super AI tech to the rescue!
soared
On iOS the unmute button will unmute and play the video. The play button did not work for me.
hobs
https://pub.wanai.pro/ltxv_hero.mp4 Same here, but the video itself does work; there are just some problems with the site.
esafak
wanai.pro says it uses Alibaba's Wan2.1 video generation model. What's going on here? Is LTXV somehow related? https://huggingface.co/blog/LLMhacker/wanai-wan21
simonw
> Is LTXV-13B open source?
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model and its tools are open source, allowing for community development and customization.
UPDATE: This is text on an unofficial website unaffiliated with the project. BUT https://www.lightricks.com/ has "LTXV open source video model" in a big header at the top of the page, so my complaint still stands, even though the FAQ copy I'm critiquing here is likely not the fault of Lightricks themselves.
So it's open weights, not open source.
Open weights is great! No need to use the wrong term for it.
From https://static.lightricks.com/legal/LTXV-2B-Distilled-04-25-... it looks like the key non-open-source terms (by the OSI definition which I consider to be canon) are:
- Section 2: entities with annual revenues of at least $10,000,000 (the “Commercial Entities”) are eligible to obtain a paid commercial use license, subject to the terms and provisions of a different license (the “Commercial Use Agreement”)
- Section 6: To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this Agreement, update the Model through electronic means, or modify the Output of the Model based on updates
This is an easy fix: change that FAQ entry to:
> Is LTXV-13B open weights?
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model is open weights and the underlying code is open source (Apache 2.0), allowing for community development and customization.
Here's where the code became Apache 2.0 6 months ago: https://github.com/Lightricks/LTX-Video/commit/cfbb059629b99...
jeroenhd
Are weights even copyrightable? I'm not sure what these licenses do, other than placate corporate legal or pretend to have some kind of open source equivalent for AI stuff.
terhechte
It says `Coming Soon` for the `inference.py` for the quantized version. Does anyone happen to know how to modify the non-quantized version [0] to work?
[0] https://github.com/Lightricks/LTX-Video/blob/main/configs/lt...
givinguflac
The requirements say:
NVIDIA 4090/5090 GPU 8GB+ VRAM (Full Version)
I have a 3070 w 8GB of VRAM.
Is there any reason I couldn’t run it (albeit slower) on my card?
washadjeffmad
Sure, just offload to system RAM; don't rely on your driver's automatic fallback, but use a specific implementation like MultiGPU.
It won't speed it up, but using a quantization that fits in VRAM will prevent the offload penalty.
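To see why a quantization that fits in VRAM matters, here is a rough back-of-the-envelope sketch. The arithmetic covers weights only and the precisions are illustrative; real usage adds activations, the VAE, and the text encoder on top:

```python
# Rough VRAM estimate for a 13B-parameter model at different precisions.
# Weights only; real memory use is higher (activations, VAE, text encoder).

def model_weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate size of the weights alone, in gibibytes."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, bpp in [("fp16/bf16", 2.0), ("fp8", 1.0), ("4-bit", 0.5)]:
    size = model_weight_gb(13, bpp)
    verdict = "fits" if size <= 8 else "needs offload"
    print(f"{name:>9}: ~{size:.1f} GB -> {verdict} on an 8 GB card")
```

At fp16 the weights alone are roughly 24 GB, so an 8 GB card like a 3070 must either offload (slow) or drop to a quantization around 4 bits per parameter.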
mycall
Will this work with ROCm instead of CUDA?
turnsout
Or MLX/Apple?
echelon
No way. AMD is lightyears behind in software support.
roenxi
That isn't really what being behind implies. We've known how to multiply matrices since ... at least the 70s. And video processing isn't a wild new task for our friends at AMD. I'd expect that this would run on an AMD card.
But I don't own an AMD card to check, because when I did it randomly crashed too often doing machine learning work.
zorgmonkey
Sometimes it takes a little more work to get things set up, but it works fine. I've run plenty of models on my 7900 XTX: Wan2.1 14B, Flux.1-dev, and Whisper (Wan and Flux with ComfyUI, Whisper with whisper.cpp).
snagadooker
The 2B model was running well on AMD, fingers crossed for the 13B too: https://www.reddit.com/user/kejos92/comments/1hjkkmx/ltxv_in...
Zambyte
Specifically for video? Ollama runs great on my 7900 XTX.
GTP
Just try it and see.
pwillia7
I'll have to test this out; it looks like it runs on consumer hardware, which is cool. I tried making a movie[1] with LTXV several months ago and had a good time, but 30x faster generation sounds necessary.
shakna
> Hi, I'm using the default image-to-video workflow with default settings and I'm getting pixelated image-to-video output full of squares. How do I fix this?
jl6
The example videos look very short, maybe 1-2 seconds each. Is that the limit?
snagadooker
The model supports both multi-scale rendering and autoregressive generation. With multi-scale rendering, you can generate a low-resolution preview of 200-300 frames and then upscale to higher resolutions (with or without tiling).
The autoregressive generation feature allows you to condition new segments based on previously generated content. A ComfyUI implementation example is available here:
https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/e...
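The autoregressive idea can be sketched in toy form: each new segment is conditioned on the tail frames of the previous one, and the overlapping frames are dropped when stitching, so segments chain into an arbitrarily long clip. The function names, frame representation, and overlap size below are illustrative stand-ins, not the actual LTXV API:

```python
# Toy sketch of autoregressive video extension: condition each new
# segment on the last few frames of the previous one, then stitch.
# Frames are just integers here; generate_segment stands in for a
# real model call.

def generate_segment(prompt: str, cond_frames: list, length: int) -> list:
    """Pretend model call: returns `length` frames, the first
    len(cond_frames) of which reproduce the conditioning frames."""
    start = cond_frames[-1] + 1 if cond_frames else 0
    return cond_frames + list(range(start, start + length - len(cond_frames)))

def generate_long_video(prompt: str, n_segments: int,
                        seg_len: int = 8, overlap: int = 2) -> list:
    video: list = []
    cond: list = []
    for _ in range(n_segments):
        seg = generate_segment(prompt, cond, seg_len)
        # Skip the conditioning frames already present in the video.
        video.extend(seg[len(cond):])
        cond = seg[-overlap:]  # tail frames condition the next segment
    return video

frames = generate_long_video("a cat surfing", n_segments=3)
print(len(frames))  # 8 + 6 + 6 = 20 frames from three 8-frame segments
```

The real implementation conditions on latent frames rather than raw ones, but the control flow, generate, carry the tail forward, stitch, is the same shape.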
moralestapia
It runs on a single consumer GPU.
Wow.
turnsout
I just tried out the model via LTX Studio, and it's extremely impressive for a 13B model, let alone one that allegedly performs in real-time.
Co-founder & CTO of Lightricks here. Cool to see our new model gaining traction on HN!
If you’re looking for the official LTXV model and working ComfyUI flows, make sure to visit the right sources:
- Official site: https://www.lightricks.com
- Model + Playground: https://huggingface.co/Lightricks/LTX-Video
The LTXV model runs on consumer GPUs, and all ComfyUI flows should work reliably from these official resources. Some third-party sites (like ltxvideo.net or wanai.pro) are broken, misconfigured, or heavy on unnecessary assets, so stick to the official ones to avoid issues and missing content.