FLUX.2: Frontier Visual Intelligence
30 comments · November 25, 2025
visioninmyblood
anjneymidha
they released a research post on how the new model's VAE was trained here: https://bfl.ai/research/representation-comparison
spyder
Great, especially that they still have an open-weight variant of this new model too. But what happened to their unreleased SOTA video model? Did it stop being SOTA, did others get ahead and they folded the project, or what? YT video about it: https://youtu.be/svIHNnM1Pa0?t=208 They even removed its page: https://bfl.ai/up-next/
liuliu
As a startup, they pivoted to focus on image models: they are model providers, image models often have more use cases than video models, and their dataset moat remains bigger in images than in video.
geooff_
Their published benchmarks leave a lot to be desired. I'd be interested in seeing their multi-image performance vs. Nano Banana. I just finished benchmarking image-editing models, and while Nano Banana is the clear winner for one-shot editing, it's not great at few-shot.
minimaxir
The issue with testing multi-image with Flux is that it's expensive under its pricing scheme ($0.015 per input image for Flux 2 Pro, $0.06 per input image for Flux 2 Flex: https://bfl.ai/pricing?category=flux.2), while the cost of adding additional images is negligible with Nano Banana ($0.000387 per image).
In the case of Flux 2 Pro, adding just one image increases the total cost to be greater than a Nano Banana generation.
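The arithmetic behind that claim can be sketched as follows. The Flux 2 Pro rates and the Nano Banana per-input-image price come from the comments above; the Nano Banana per-generation price is an assumption for illustration, not stated in the thread:

```python
# Back-of-envelope cost for a single 1 MP edit with one reference image.
FLUX2_PRO_FIRST_OUTPUT_MP = 0.03    # $ for the first output megapixel (from thread)
FLUX2_PRO_INPUT_MP = 0.015          # $ per input megapixel / reference image (from thread)
NANO_BANANA_GENERATION = 0.039      # assumed $ per Nano Banana generation
NANO_BANANA_EXTRA_INPUT = 0.000387  # $ per additional input image (from thread)

flux_cost = FLUX2_PRO_FIRST_OUTPUT_MP + FLUX2_PRO_INPUT_MP    # 0.045
nano_cost = NANO_BANANA_GENERATION + NANO_BANANA_EXTRA_INPUT  # ~0.0394

print(f"Flux 2 Pro, 1 ref image:  ${flux_cost:.3f}")
print(f"Nano Banana, 1 ref image: ${nano_cost:.4f}")
```

Under those assumed numbers, a single reference image already pushes the Flux 2 Pro request past a full Nano Banana generation.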
542458
> Run FLUX.2 [dev] on GeForce RTX GPUs for local experimentation with an optimized fp8 reference implementation of FLUX.2 [dev], created in collaboration with NVIDIA and ComfyUI.
Glad to see that they're sticking with open weights.
That said, Flux 1.x was 12B params, right? So this is about 3x as large, plus a 24B text encoder (unless I'm misunderstanding), which might make local use a significant challenge. I'll be looking forward to the distilled version.
minimaxir
Looking at the file sizes on the open-weights version (https://huggingface.co/black-forest-labs/FLUX.2-dev/tree/mai...), the 24B text encoder is 48GB and the generation model itself is 64GB, which roughly tracks with the 32B parameters mentioned.
Downloading over 100GB of model weights is a tough sell for local-only hobbyists.
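Those file sizes line up with a two-bytes-per-parameter (bf16/fp16) layout; a quick sanity check:

```python
def weights_gb(params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Approximate checkpoint size in (decimal) GB for bf16/fp16 weights."""
    # params_billions * 1e9 params * bytes_per_param bytes / 1e9 bytes-per-GB
    return params_billions * bytes_per_param

print(weights_gb(32))                   # 64.0 GB -> the generation model
print(weights_gb(24))                   # 48.0 GB -> the 24B text encoder
print(weights_gb(32) + weights_gb(24))  # 112.0 GB total download
```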
_ache_
Not even a 5090 can handle that; you'd have to use multiple GPUs.
So the only single-GPU option will be [klein]... maybe? We don't have much information yet.
BadBadJellyBean
Never mind the download size. Who has the VRAM to run it?
notrealyme123
> The FLUX.2 - VAE is available on HF under an Apache 2.0 license.
Has anyone found it? For me, the link doesn't lead to the model.
xnx
Good to see there's some competition to Nano Banana Pro. Other players are important for keeping the price of the leaders in check.
mlnj
Also happy to see European players doing it.
minimaxir
The text encoder is Mistral-Small-3.2-24B-Instruct-2506 (which is multimodal), as opposed to the weird choice of CLIP and T5 in the original FLUX, so that's a good start, albeit kinda big for a model intended to be open-weight. BFL likely should have held off the release until their Apache 2.0 distilled model was ready, in order to better differentiate from Nano Banana/Nano Banana Pro.
The pricing structure on the Pro variant is...weird:
> Input: We charge $0.015 for each megapixel on the input (i.e. reference images for editing)
> Output: The first megapixel is charged $0.03 and then each subsequent MP will be charged $0.015
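That scheme is easy to model per request (a sketch, assuming the quoted per-MP rates apply linearly and input is billed per megapixel across all reference images):

```python
def flux2_pro_price(input_mp: float, output_mp: float) -> float:
    """$ per FLUX.2 Pro request under the quoted scheme (assumed linear in MP)."""
    # First output megapixel costs $0.03, each subsequent MP $0.015.
    output = 0.03 + max(0.0, output_mp - 1.0) * 0.015
    # Input (reference images for editing) is $0.015 per megapixel.
    return input_mp * 0.015 + output

print(flux2_pro_price(0, 1))  # 0.03  - plain 1 MP generation
print(flux2_pro_price(1, 1))  # 0.045 - 1 MP edit with one 1 MP reference
print(flux2_pro_price(2, 4))  # 0.105 - 2 MP of references, 4 MP output
```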
woadwarrior01
> BFL likely should have held off the release until their Apache 2.0 distilled model was released in order to better differentiate from Nano Banana/Nano Banana Pro.
Qwen-Image-Edit-2511 is going to be released next week. And it will be Apache 2.0 licensed. I suspect that was one of the factors in the decision to release FLUX.2 this week.
minimaxir
Fair point.
kouteiheika
> as opposed to the weird choice to use CLIP and T5 in the original FLUX
Yeah, CLIP was essentially useless there. You can even completely zero the weights through which the model ingests the CLIP input, and it barely changes anything.
beernet
Nice catch. Looks like engineers tried to take care of the GTM part as well and (surprise!) messed it up. In any case, the biggest loser here is Europe once again.
throwaway314155
> as opposed to the weird choice to use CLIP and T5 in the original FLUX
This method was used in tons of image generation models. Not saying it's superior or even a good idea, but it definitely wasn't "weird".
AmazingTurtle
I ran "family guy themed cyberpunk 2077 ingame screenshot, peter griffin as main character, third person view, view of character from the back" on both Nano Banana Pro and BFL Flux 2 Pro, and the results were strikingly different: the Google model aligned better with the Cyberpunk in-game look, while Flux was too "realistic".
Yokohiii
An 18GB 4-bit quant via diffusers. "Low-VRAM setup" :)
DeathArrow
We probably won't be able to run it on regular PCs, even with a 5090. So I'm curious how good the results will be with a quantized version.
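A rough weights-only estimate at different bit widths, for the 32B generation model alone (a sketch; it ignores the text encoder, activations, and quantization overhead such as per-block scales):

```python
def quantized_weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Weights-only memory in (decimal) GB at a given bit width."""
    return params_billions * bits_per_param / 8

for bits in (16, 8, 4):
    print(f"{bits:2d}-bit: {quantized_weights_gb(32, bits):.0f} GB")
```

4-bit works out to roughly 16GB of raw weights, which is consistent with the ~18GB quant mentioned upthread once scales and other overhead are added.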
echelon
> Launch Partners
Wow, the Krea relationship soured? These are both a16z companies and they've worked on private model development before. Krea.1 was supposed to be something to compete with Midjourney aesthetics and get away from the plastic-y Flux models with artificial skin tones, weird chins, etc.
This list of partners includes all of Krea's competitors: HiggsField (current aggregator leader), Freepik, "Open"Art, ElevenLabs (which now has an aggregator product), Leonardo.ai, Lightricks, etc. but Krea is absent. Really strange omission.
I wonder what happened.
DeathArrow
If this is still a diffusion model, I wonder how well it compares with Nano Banana.
The model looks good for an open-source one. I want to see how these models are trained: maybe they start from a base model trained on academic datasets and quickly fine-tune on outputs from models like Nano Banana Pro? That could be the game for such models. Either way, it's great to see an open-source model competing with the big players.