Liquid: Language models are scalable and unified multi-modal generators
gwern · April 15, 2025
> For the first time, Liquid uncovers a scaling law: [the] performance drop unavoidably brought by the unified training of visual and language tasks diminishes as the model size increases... No prior work has explored whether LLMs retain the power-law scaling laws observed in language tasks when extended to visual generation tasks. We prove this alignment and further show that vision can be effectively learned by LLMs as a form of language.
Does this really show much that https://arxiv.org/abs/2301.03728#facebook (uncited) and other earlier work did not?
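(For context on the claim being questioned: the "power-law scaling law" at issue is the usual loss-versus-parameter-count fit. A minimal sketch of fitting one is below; the numbers are made up for illustration and are not from the Liquid paper or the cited prior work.)

```python
import numpy as np

# Hypothetical (model size in params, validation loss) points --
# illustrative only, NOT data from the Liquid paper.
sizes = np.array([0.5e9, 1e9, 2e9, 7e9, 32e9])
losses = np.array([3.10, 2.85, 2.62, 2.31, 1.98])

# A power law L(N) = a * N^(-b) is a straight line in log-log space,
# so fit log(L) = log(a) - b * log(N) with ordinary least squares.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
a, b = np.exp(intercept), -slope
print(f"L(N) ~= {a:.2f} * N^(-{b:.3f})")
```

The paper's claim is that both the language loss and the visual-generation loss follow fits of this shape as N grows, and that the gap induced by joint training shrinks with scale.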
swyx
Hmm, this is a tough name - it conflicts with Liquid AI https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Nijikokun
It performs well on composition, but SD and SDXL excel in capability and quality when combined with pipelines and workflows, and this paper doesn't say much about that comparison. Whenever I see things like this I think about the overall workflow (see the sketch below): cool, you do good composition, but if you don't fit within the workflow or ecosystem that surrounds those tools, I have low expectations for adoption.
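(To make the "pipelines and workflows" point concrete: below is roughly what a standard multi-stage SDXL workflow looks like with Hugging Face diffusers, using the public base and refiner checkpoints and assuming a CUDA GPU. It's a sketch of the documented base+refiner pattern, not anything from the Liquid paper; a unified LLM generator has no obvious drop-in slot in this kind of pipeline.)

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Stage 1: the base model generates a latent image.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Stage 2: the refiner polishes that latent -- one of many composable
# steps (ControlNet, upscalers, etc.) the surrounding ecosystem provides.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse on a cliff at dusk, oil painting"
latent = base(prompt=prompt, output_type="latent").images
image = refiner(prompt=prompt, image=latent).images[0]
image.save("out.png")
```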
I love the website for this paper! Each section asks a question, and immediately answers it with a figure and a few sentences of discussion. It's less tech-demo heavy than a lot of other paper websites (those are cool, too, in their own way), and instead focuses on characterizing multimodal model behavior in a nice, clean, disciplined way.