FLUX.1 Kontext

121 comments

·May 29, 2025

xnorswap

I tried this out and a hilarious "context-slip" happened:

https://imgur.com/a/gT6iuV1

It generated (via a prompt) an image of a space ship landing on a remote planet.

I asked for an edit: "The ship itself should be more colourful and a larger part of the image".

And it replaced the space-ship with a container vessel.

It had the chat history; it should have understood I still wanted a space-ship, but it dropped the relevant context for what I was trying to achieve.

gunalx

I mean, to its credit, one of the container ships seems to be flying. /s

minimaxir

Currently am testing this out (using the Replicate endpoint: https://replicate.com/black-forest-labs/flux-kontext-pro). Replicate also hosts "apps" with examples using FLUX Kontext for some common use cases of image editing: https://replicate.com/flux-kontext-apps

It's pretty good: the quality of the generated images is similar to that of GPT-4o image generation if you were using it for simple image-to-image generation. Generation is speedy, at roughly 4 seconds per image.

Prompt engineering outside of the examples used on this page is a little fussy and I suspect will evolve over time. Changing styles or specific aspects does indeed work, but the more specific you get, the more it tends to ignore the specifics.
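
For reference, calling it through Replicate's Python client is only a few lines. Here's a minimal sketch; the input field names ("prompt", "input_image") are my reading of the model page and may not match the exact schema, so check it before relying on this.

  # pip install replicate, and set REPLICATE_API_TOKEN in the environment
  import replicate

  output = replicate.run(
      "black-forest-labs/flux-kontext-pro",
      input={
          # Instruction-style edit prompt plus the image to edit (assumed field names)
          "prompt": "Change the car to a bright red convertible while keeping the background the same",
          "input_image": "https://example.com/source.png",  # hypothetical source image URL
      },
  )
  print(output)  # typically a URL / file handle for the edited image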

a2128

It seems more accurate than 4o image generation in terms of preserving original details. If I give it my 3D animal character and ask it for a minor change like changing the lighting, 4o will completely mangle the face of my character, it will change the body and other details slightly. This Flux model keeps the visible geometry almost perfectly the same even when asked to significantly change the pose or lighting

echelon

gpt-image-1 (aka "4o") is still the most useful general purpose image model, but damn does this come close.

I'm deep in this space and feel really good about FLUX.1 Kontext. It fills a real gap, and it makes sure that OpenAI and Google aren't the runaway victors of images and video.

Prior to gpt-image-1, the biggest problems in images were:

  - prompt adherence
  - generation quality
  - instructiveness (e.g. "put the sign above the second door")
  - consistency of styles, characters, settings, etc. 
  - deliberate and exact intentional posing of characters and set pieces
  - compositing different images or layers together
  - relighting
Fine-tunes, LoRAs, and IPAdapters fixed a lot of this, but they were a real pain in the ass. ControlNets solved for pose, but it was still awkward and ugly. ComfyUI orchestrated this layer of hacks and kind of got the job done, but it was unmaintainable glue. It always felt like a fly-by-night solution.

OpenAI's gpt-image-1 solved all of these things with a single multimodal model. You could throw out ComfyUI and all the other pre-AI garbage and work directly with the model itself. It was magic.

Unfortunately, gpt-image-1 is ridiculously slow, insanely expensive, and highly censored (you can't use a lot of copyrighted characters or celebrities, and a lot of totally SFW prompts are blocked). It can't be fine-tuned, so you're stuck with the "ChatGPT style" and what the community calls the "piss filter" (perpetually yellowish images).

And the biggest problem with gpt-image-1 is that, because it puts image and text tokens in the same space to manipulate, it can't retain the precise pixel-level structure of reference images. Because of that, it cannot function as an inpainting/outpainting model whatsoever. You can't use it to edit existing images if the original image mattered.

Even with those flaws, gpt-image-1 was a million times better than Flux, ComfyUI, and all the other ball of wax hacks we've built up. Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train the competition and that OpenAI would win the space forever. We'd be left with only hyperscalers of AI building these models. And it would suck if Google and OpenAI were the only providers of tools for artists.

Black Forest Labs just proved that wrong in a big way! While this model doesn't do everything as well as gpt-image-1, it's within the same order of magnitude. And it's ridiculously fast (10x faster) and cheap (10x cheaper).

Kontext isn't as instructive as gpt-image-1. You can't give it multiple pictures and ask it to copy characters from one image into the pose of another image. You can't have it follow complex compositing requests. But it's close, and that makes it immediately useful. It fills a real gap in the space.

Black Forest Labs did the right thing by developing this instead of a video model. We need much more innovation in the image model space, and we need more gaps to be filled:

  - Fast
  - Truly multimodal like gpt-image-1
  - Instructive 
  - Posing built into the model. No ControlNet hacks. 
  - References built into the model. No IPAdapter, no required character/style LoRAs, etc. 
  - Ability to address objects, characters, mannequins, etc. for deletion / insertion. 
  - Ability to pull sources from across multiple images with or without "innovation" / change to their pixels.
  - Fine-tunable (so we can get higher quality and precision) 
 
Something like this that works in real time would literally change the game forever.

Please build it, Black Forest Labs.

All of those feature requests aside, Kontext is a great model. I'm going to be learning it over the next few weeks.

Keep at it, BFL. Don't let OpenAI win. This model rocks.

Now let's hope Kling or Runway (or, better, someone who does open weights -- BFL!) develops a Veo 3 competitor.

I need my AI actors to "Meisner", and so far only Veo 3 comes close.

qingcharles

When I first saw gpt-image-1, I was equally scared that OpenAI had used its resources to push so far ahead that more open models would be left completely in the dust for the foreseeable future.

Glad to see this release. It also puts more pressure onto OpenAI to make their model less lobotomized and to increase its output quality. This is good for everyone.

whywhywhywhy

>Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train the competition

OpenAI models are expensive to train because it benefits OpenAI for them to be expensive, and there is no incentive to optimize when they're going to run in a server farm anyway.

Probably a bunch of teams never bothered trying to replicate Dall-E 1+2 because the training run cost millions, yet SD1.5 showed us comparable tech can run on a home computer and be trained from scratch for thousands or fine tuned for cents.

meta87

This breakdown made my day, thank you!

I'm building a web-based paint/image editor with AI inpainting and the like,

and this is going to be a great model to use, both price-wise and capability-wise.

Completely agree, and I'm so happy it's not any one of these big cos controlling the whole space!

ttoinou

Your comment is def why we come to HN :)

Thanks for the detailed info

cuuupid

Honestly love Replicate for always being up to date. It’s amazing that not only do we live in a time of rapid AI advancement, but that every new research grade model is immediately available via API and can be used in prod, at scale, no questions asked.

Something to be said about distributors like Replicate etc that are adding an exponent to the impact of these model releases

meowface

I have no affiliation with either company but from using both a bunch as a customer: Replicate has a competitor at https://fal.ai/models and FAL's generation speed is consistently faster across every model I've tried. They have some sub-100 ms image gen models, too.

Replicate has a much bigger model selection. But for every model that's on both, FAL is pretty much "Replicate but faster". I believe pricing is pretty similar.

bfirsh

Founder of Replicate here. We should be on par or faster for all the top models. e.g. we have the fastest FLUX[dev]: https://artificialanalysis.ai/text-to-image/model-family/flu...

If something's not as fast let me know and we can fix it. ben@replicate.com

echelon

A16Z invested in both. It's wild. They've been absolutely flooding the GenAI market for images and videos with investments.

They'll have one of the victors, whoever it is. Maybe multiple.

minimaxir

That's less about the downstream distributors and more about the model developers themselves realizing that ease of accessibility on Day 1 is important for getting community traction. Locking the model exclusively behind their own API won't work anymore.

Llama 4 was another recent case where they explicitly worked with downstream distributors to get it working Day 1.

reissbaker

In my quick experimentation for image-to-image this feels even better than GPT-4o: 4o tends to heavily weight the colors towards sepia, to the point where it's a bit of an obvious tell that the image was 4o-generated (especially with repeated edits); FLUX.1 Kontext seems to use a much wider, more colorful palette. And FLUX, at least the Max version I'm playing around with on Replicate, nails small details that 4o can miss.

I haven't played around with from-scratch generation, so I'm not sure which is best if you're trying to generate an image just from a prompt. But in terms of image-to-image via a prompt, it feels like FLUX is noticeably better.

skipants

> Generation is speedy at about ~4 seconds per generation

May I ask on which GPU & VRAM?

edit: oh unless you just meant through huggingface's UI

zamadatix

The open weights variant is "coming soon" so the only option is hosted right now.

minimaxir

It is through the Replicate UI listed above, which goes through Black Forest Labs's infra, so you would likely get the same results from their API.

sujayk_33

So here is my understanding of the current native image generation scenario. I might be wrong, so please correct me; I'm still learning this and I'd appreciate the help.

Native image gen was first introduced in Gemini 1.5 Flash, if I'm not wrong, and then OpenAI released it for 4o, which took over the internet with Ghibli art.

We have been getting good-quality images from almost all image generators like Midjourney, OpenAI, and other providers, but the thing that made this special was its truly "multimodal" nature. Here's what I mean:

When you used to ask ChatGPT to create an image, it would rephrase that prompt and internally send it to DALL-E; similarly, Gemini would send it to Imagen. These were diffusion models, and they had little to no context in the next response about what was in the previous image.

In native image generation, the model understands audio, text, and even image tokens in the same model and doesn't need to rely on separate diffusion models internally. I don't think either OpenAI or Google has released how they trained it, but my guess is that it's partially auto-regressive and partially diffusion, though I'm not sure.

claudefocan

This is not fully correct.

The people behind Flux are the authors of the Stable Diffusion paper, which dates back to 2022.

OpenAI initially had DALL-E, but Stable Diffusion was a massive improvement on it.

Then OpenAI took inspiration from Stable Diffusion for gpt-image.

vunderba

Some of these samples are rather cherry picked. Has anyone actually tried the professional headshot app of the "Kontext Apps"?

https://replicate.com/flux-kontext-apps

I've thrown half a dozen pictures of myself at it and it just completely replaced me with somebody else. To be fair, the final headshot does look very professional.

mac-mc

I tried a professional headshot prompt on the Flux playground with a tired gym selfie, and it kept me as myself: same expression, sweat, skin tone and all. It was basically a background swap. Then I expanded the prompt to "make a professional headshot version of this image that would be good for social media, make the person smile, have a good pose and clothing, clean non-sweaty skin, etc." and it stayed pretty similar, except it swapped the clothing and gave me an awkward smile, which may be accurate for those kinds of photos if you think about it.

diggan

It isn't mentioned on https://replicate.com/flux-kontext-apps/professional-headsho..., but on https://replicate.com/black-forest-labs/flux-kontext-pro, under the "Prompting Best Practices" section, it says this:

> Preserve Intentionally

> Specify what should stay the same: “while keeping the same facial features”

> Use “maintain the original composition” to preserve layout

> For background changes: “Change the background to a beach while keeping the person in the exact same position”

So while the marketing paints a picture that it'll preserve things automatically and kind of understand exactly what you want changed, that doesn't seem to be the whole truth. You instead need to be very specific about what you want to preserve.
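
For example, a headshot edit that follows those guidelines might look like this (an illustrative prompt, not one taken from their docs):

  Turn this photo into a professional headshot with a neutral studio background, while keeping the same facial features, hairstyle and skin tone, and maintaining the original composition.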

minimaxir

Is the input image aspect ratio the same as the output aspect ratio? In some testing I've noticed that there is weirdness that happens if there is a forced shift.

pkrx

It's convenient but the results are def not significantly better than available free stuff

doctorpangloss

Nobody has solved the scientific problem of identity preservation for faces in one shot. Nobody has even solved hands.

emmelaich

I tried making a realistic image from a cartoon character but aged. It did very well, definitely recognisable as the same 'person'.

danielbln

Best bet right now is still face swapping with something like insightface.
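
For anyone who hasn't used it, the usual insightface workflow is roughly the sketch below: detect faces, then apply the separately distributed inswapper model. The model filename/path is an assumption; you have to obtain the weights yourself.

  # pip install insightface onnxruntime opencv-python
  import cv2
  import insightface
  from insightface.app import FaceAnalysis

  app = FaceAnalysis(name="buffalo_l")             # detection + recognition models
  app.prepare(ctx_id=0, det_size=(640, 640))
  swapper = insightface.model_zoo.get_model("inswapper_128.onnx")  # assumed local path

  source = cv2.imread("identity.jpg")              # the face you want to preserve
  target = cv2.imread("generated.jpg")             # the AI-generated image

  src_face = app.get(source)[0]
  result = target.copy()
  for face in app.get(target):                     # swap every detected face
      result = swapper.get(result, face, src_face, paste_back=True)

  cv2.imwrite("swapped.jpg", result)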

vunderba

I'm debating whether to add the FLUX Kontext model to my GenAI image comparison site. The Max variant of the model definitely scores higher in prompt adherence, nearly doubling the Flux 1 Dev score, but still falls short of OpenAI's gpt-image-1, which (visual fidelity aside) is sitting at the top of the leaderboard.

I liked keeping Flux 1.D around just to have a nice baseline for local GenAI capabilities.

https://genai-showdown.specr.net

Incidentally, we did add the newest release of Hunyuan's Image 2.0 model but as expected of a real-time model it scores rather poorly.

EDIT: In fairness to Black Forest Labs this model definitely seems to be more focused on editing capabilities to refine and iterate on existing images rather than on strict text-to-image creation.

nopinsight

Wondering if you could add “Flux 1.1 Pro Ultra” to the site? It’s supposed to be the best among the Flux family of models, and far better than Flux Dev (3rd among your current candidates) at prompt adherence.

Adding it would also provide a fair assessment for a leading open source model.

The site is a great idea and features very interesting prompts. :)

Klaus23

Nice site! I have a suggestion for a prompt that I could never get to work properly. It's been a while since I tried it, and the models have probably improved enough that it should be possible now.

  A knight with a sword in hand stands with his back to us, facing down an army. He holds his shield above his head to protect himself from the rain of arrows shot by archers visible in the rear.
I was surprised at how badly the models performed. It's a fairly iconic scene, and there's more than enough training data.

lawik

Making an accurate flail (stick, chain, ball) is a fun sport... weird things tend to happen.

null

[deleted]

theyinwhy

Looks good! Would be great to see Adobe Firefly in your evaluation as well.

meta87

please add! cool site thanks :)

liuliu

It seems the implementation is straightforward (very similar to everyone else's: HiDream-E1, ICEdit, DreamO, etc.); the magic is in the data curation, the details of which are only lightly shared.

krackers

I haven't been following image generation models closely. At a high level, is this new Flux model still diffusion-based, or have they moved to block autoregressive (possibly with diffusion for upscaling), similar to 4o?

anotherpaul

Well it's a "generative flow matching model"

That's not the same as a diffusion model.

Here is a post about the difference that seems right at first glance: https://diffusionflow.github.io/
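
Very roughly, the difference shows up in the training objective. This is a generic sketch (rectified-flow-style flow matching vs DDPM-style noise prediction), not BFL's actual formulation or code:

  import torch

  def flow_matching_loss(model, x1):
      """Learn a velocity field along a straight path from noise x0 to data x1."""
      x0 = torch.randn_like(x1)                      # pure noise
      t = torch.rand(x1.shape[0], 1, 1, 1)           # uniform time in [0, 1]
      xt = (1 - t) * x0 + t * x1                     # straight-line interpolation
      return ((model(xt, t) - (x1 - x0)) ** 2).mean()

  def diffusion_loss(model, x1, alpha_bar):
      """DDPM-style: predict the noise that was mixed into the data."""
      eps = torch.randn_like(x1)
      t = torch.randint(0, len(alpha_bar), (x1.shape[0],))
      a = alpha_bar[t].view(-1, 1, 1, 1)
      xt = a.sqrt() * x1 + (1 - a).sqrt() * eps
      return ((model(xt, t) - eps) ** 2).mean()

Both learn to turn noise into images; flow matching just parameterizes the path (and therefore the sampler) differently.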

null

[deleted]

liuliu

Diffusion-based. There is no point in moving to auto-regressive if you are not also training a multimodal LLM, which these companies are not doing.

rvz

Unfortunately, nobody wants to read the report; what they are really after is downloading the open-weight model.

So they can take it and run with it. (No contributing back either).

anjneymidha

"FLUX.1 Kontext [dev]

Open-weights, distilled variant of Kontext, our most advanced generative image editing model. Coming soon" is what they say on https://bfl.ai/models/flux-kontext

sigmoid10

Distilled is a real downer, but I guess those AI startup CEOs still gotta eat.

refulgentis

I agree that the gooning crowd drives a lot of open-model downloads.

On HN, generally, people are more into technical discussion and/or productizing this stuff. Here it seems déclassé to mention the gooner angle; it's usually euphemized as intense reactions about refusing to download a model, involving the word "censor".

mdp2021

Is input restricted to a single image? If you could use more images as input, you could do prompts like "Place the item in image A inside image B" (e.g. "put the character of image A in the scenery of image B"), etc.

carlosdp

There's an experimental "multi" mode that you can input multiple images to.

echelon

Fal has the multi image interface to test against. (Replicate might as well, I haven't checked yet.)
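
If you want to script it, fal's Python client is the quickest way in. The endpoint id and argument names below are guesses on my part, so check the model page for the real schema:

  # pip install fal-client, and set FAL_KEY in the environment
  import fal_client

  result = fal_client.subscribe(
      "fal-ai/flux-pro/kontext/max/multi",          # assumed endpoint id
      arguments={
          "prompt": "Put the character from the first image into the scene from the second",
          "image_urls": [                           # assumed parameter name
              "https://example.com/character.png",
              "https://example.com/scene.png",
          ],
      },
  )
  print(result)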

THIS MODEL ROCKS!

It's no gpt-image-1, but it's ridiculously close.

There isn't going to be a moat in images or video. I was so worried Google and OpenAI would win creative forever. Not so. Anyone can build these.

ttoinou

How knowledgeable do you need to be to tweak and train this locally?

I spent two days trying to train a LoRA customization on top of Flux 1 Dev on Windows with my RTX 4090, but I can't make it work, and I don't know how deep into this topic and its Python libraries I need to go. Are there script kiddies in this game, or only experts?

throwaway675117

Just use https://github.com/bghira/SimpleTuner

I was able to run this script to train a LoRA myself without spending any time learning the underlying Python libraries.
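
If it helps demystify things: a LoRA is just a small low-rank update trained on top of frozen weights. A conceptual PyTorch sketch (not SimpleTuner's code) of what the trainer is fitting inside the Flux transformer:

  import torch
  import torch.nn as nn

  class LoRALinear(nn.Module):
      """Wraps a frozen nn.Linear and adds a trainable low-rank update B @ A."""
      def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
          super().__init__()
          self.base = base
          for p in self.base.parameters():
              p.requires_grad_(False)                # pretrained weights stay frozen
          self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
          self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
          self.scale = alpha / rank

      def forward(self, x):
          return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

Only A and B (a small fraction of the model's parameters) get trained, which is why this kind of training can fit on a single 4090 at all.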

ttoinou

Well thank you I will test that

dagaci

SimpleTuner is dependent on Microsoft's DeepSpeed, which doesn't work on Windows :)

So you're probably better off using AI Toolkit: https://github.com/ostris/ai-toolkit

minimaxir

The open-source model is not released yet, but it definitely won't be any easier than training a LoRA on Flux 1 Dev.

ttoinou

Damn, I’m just too lazy to learn skills that will be outdated in 6 months

Flemlo

It's normally easy to find it preconfigured through ComfyUI.

Sometimes it's behind a Patreon from some YouTuber.

3abiton

> I spent two days trying to train a LoRa customization on top of Flux 1 dev on Windows with my RTX 4090 but can’t make

Windows is mostly the issue; to really take advantage, you will need Linux.

ttoinou

Even using WSL2 with Ubuntu isn't good enough ?

AuryGlenz

Nah, that’s fine. So is Windows for most tools.

The main thing is having 1. Good images with adequate captions and 2. Knowing what settings to use.

Number 2 is much harder because there's a lot of bad information out there and the people who train a ton of LoRAs aren't usually keen to share. Still, the various programs usually have some defaults that should be acceptable.

nullbyte

Hopefully they list this on HuggingFace for the opensource community. It looks like a great model!

vunderba

From their site, they will be releasing the Dev version, which is a distilled variant, so quality and adherence will unfortunately suffer.

minimaxir

The original open-source Flux releases were also on Hugging Face.

layer8

> show me a closeup of…

Investigators will love this for “enhance”. ;)

mdp2021

At some point, "Do not let the tool invent details!" will become one of the most frequently shouted phrases.

amazingamazing

I don't understand the remove-from-face example. Without other pictures showing the person's face, it's just using some stereotypical image, no?

sharkjacobs

There's no "truth" it's uncovering, no real face; these are all just generated images, yes.

amazingamazing

I get that, but usually you would have two inputs: the reference "truth" and the target that is to be manipulated.

nine_k

Not necessarily. "As you may see, this is a Chinese lady. You have seen a number of Chinese ladies in your training set. Imagine the face of this lady so that it won't contradict the fragment visible on the image with the snowflake". (Damn, it's a pseudocode prompt.)

ilaksh

Look more closely at the example. Clearly there is an opportunity for inference with objects that only partially obscure.

vessenes

Mm, depends on the underlying model and where it is in the pipeline; identity models are pretty sophisticated at interpolating faces from partial geometry.

Scaevolus

The slideshow appears to be glitched on that first example. The input image has a snowflake covering most of her face.

whywhywhywhy

That's the point, it can remove it.

jorgemf

I think they are doing that because, with real images, the model changes the face. That problem goes away if the initial image doesn't show the face.

pkkkzip

They chose Asian traits that Western beauty standards fetishize but that in Asia wouldn't be taken seriously at all.

I notice American text2image models tend to generate less attractive and darker-skinned humans, whereas Chinese text2image models generate more attractive and lighter-skinned humans.

I think this is another area where Chinese AI models shine.

throwaway314155

> I notice American text2image models tend to generate less attractive and darker-skinned humans, whereas Chinese text2image models generate more attractive and lighter-skinned humans

This seems entirely subjective to me.

viraptor

> They chose Asian traits that Western beauty standards fetishize but that in Asia wouldn't be taken seriously at all.

> whereas Chinese text2image models generate more attractive and lighter-skinned humans.

Are you saying they have chosen Asian traits that Asian beauty standards fetishize that in the West wouldn't be taken seriously at all? ;) There is no ground truth here that would be more correct one way or the other.

turnsout

Wow, that is some straight-up overt racism. You should be ashamed.

fc417fc802

It reads as racist if you parse it as (skin tone and attractiveness), but if you instead parse it as (skin tone) and (attractiveness), i.e. as two entirely unrelated characteristics of the output, then it reads as nothing more than a claim about relative differences in behavior between models.

Of course, given the sensitivity of the topic it is arguably somewhat inappropriate to make such observations without sufficient effort to clarify the precise meaning.

astrange

Asians can be pretty colorist within themselves and they're not going to listen to you when you tell them it's bad. Asian women love skin-lightening creams.

This particular woman looks Vietnamese to me, but I agree nothing about her appearance looks like anyone's fashion I know. But I only know California ABGs so that doesn't mean much.

ilaksh

Anyone have a guess as to when the open Dev version gets released? More like a week, or a month or two, I wonder.