Adobe Lightroom's AI Remove feature added a Bitcoin to bird in flight photo

Culonavirus

If you zoom in and squint your eyes, it does look like some kind of shiny coin.

What I'd like to know though... is how is the model so bad that when you tell it to "remove this artifact" ... instead of it looking at the surroundings and painting over with some DoF-ed out ocean... it slaps an even more distinct artifact in there? Makes no sense.

samsartor

A lot of current inpainting models have quite a lot of "signal leak". They're more for covering stuff vs removing it entirely.

Ironically, some older SD1/2-era models work a lot better for complete removal.

pclmulqdq

I have to say that I think the photo without the Lightroom processing actually looks better. The second one hasn't just added a bitcoin, it has also added the "AI shimmer" that seems to be a part of a lot of generated images. I can't put my finger on exactly what the characteristic is, but my first instinct (separate from the bitcoin, which was hard to find) is "that's an AI picture." Someone should just spend some time in the blur tool if they don't like those glints.

serviceberry

I don't think there's any AI-fication going on in that photo. The modified version has a more compressed tone curve to bring out more detail, along with jacked up saturation (especially evident for water). This is similar to what most cell phones do by default to make photos look more pleasing.
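The "compressed tone curve plus jacked-up saturation" look described above can be sketched in a few lines of plain Python. The curve shape (a simple gamma lift) and the saturation factor are made up for illustration; real raw processors use more sophisticated curves:

```python
def compress_tones(v, strength=0.6):
    """Midtone-lifting tone curve: gamma < 1 lifts shadows and
    compresses the overall tonal range (v is normalized to 0..1)."""
    return v ** strength

def boost_saturation(r, g, b, factor=1.4):
    """Push each channel away from the pixel's luma, clamped to 0..1."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    sat = lambda c: min(1.0, max(0.0, luma + (c - luma) * factor))
    return sat(r), sat(g), sat(b)

# A dark, slightly blue "water" pixel (normalized 0..1):
r, g, b = 0.10, 0.20, 0.35
r2, g2, b2 = (compress_tones(c) for c in (r, g, b))
print(boost_saturation(r2, g2, b2))  # brighter and more vividly blue
```

Applied per pixel, this is roughly the "make it pop" default that phone cameras ship with: shadows come up, and color channels spread further apart from neutral.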

I do agree that the original looks better, but the author of the post clearly prefers the modified version.

strogonoff

Clipped highlights in digital photography are simply impossible to eliminate in post-processing without conjuring nonexistent information. Even if you shoot raw. Different raw processors use different tricks, color propagation and such, but naturally they work best when highlights are really small. I would not be surprised if tools like Lightroom invoke ML at the first hint of clipping (because why not, if all you have is a hammer…).

Pro tip: Digital sensors are much less forgiving than negative film when it comes to exposing highlights. With a bit of foresight they are best tackled at shooting time. Highlights from water/glass reflections are tamed by a fairly cheap polarizing filter, and if you shoot raw you should do the opposite of negative film and always underexpose a scene with bright highlights (especially if highlights are large or are in your subject of interest). Let it be dark, you will have more noise, but noise is manageable without having to invent what doesn’t exist in the parts that are the most noticeable to human eye.
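The asymmetry in that tip can be shown with a toy sensor model (the numbers are invented for illustration): clipping destroys information permanently, while underexposure just scales it down, so it can be pushed back up in post at the cost of noise:

```python
FULL_WELL = 1.0  # toy sensor saturates here; anything above is clipped

def expose(scene_luminance, exposure):
    """Toy sensor response: signal scales with exposure, then hard-clips."""
    return min(scene_luminance * exposure, FULL_WELL)

highlight = 3.0  # a specular glint, much brighter than the midtones
midtone = 0.4

# "Correct" exposure for the midtones clips the glint:
normal = expose(highlight, 1.0)   # pinned at 1.0, detail destroyed
# Two stops under (x0.25) keeps the glint on the sensor:
under = expose(highlight, 0.25)   # 0.75, still below full well
recovered = under / 0.25          # pushed back in post: exactly 3.0
print(normal, under, recovered)
```

The midtones suffer the same x0.25 scaling and get noisier when pushed back, which is the trade-off being described: manageable noise in the shadows versus unrecoverable highlights.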

serviceberry

> Clipped highlights in digital photography are simply impossible to eliminate in post-processing without conjuring nonexistent information. Even if you shoot raw.

Huh? This used to be true twenty years ago, but modern sensors in prosumer cameras capture a lot more dynamic range than can be displayed on the screen or conveyed in a standard JPEG. If you shoot raw, you absolutely have the information needed to rescue nominally clipped highlights. You get 2-4 stops of latitude without any real effort.

The problem here is different. If the rest of the scene is exposed correctly, you have to choose one or the other: overexposed highlights or an underexposed subject. The workaround is to use tone mapping or local contrast tricks, but these easily give your photos a weird "HDR" look.

DrewADesign

nah-- even in the supplied jpg the histogram shows they're not clipping the highlights, and if you crank down the brightness, you can see the detail.
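The histogram check above can be made concrete: in an 8-bit JPEG, "clipped" means pixels pinned at the maximum value, not just bright ones. A minimal sketch (pixel values invented for the example):

```python
def clipped_fraction(pixels, white=255):
    """Fraction of 8-bit pixel values pinned at the maximum. A spike
    here means the highlights are truly clipped; values merely near
    the maximum still hold detail you can recover by darkening."""
    return sum(1 for p in pixels if p >= white) / len(pixels)

# Bright-but-not-clipped glint: values crowd near 255 without touching it.
glint = [250, 252, 253, 254, 254, 248, 251, 253]
# Actually clipped: a run of pixels pinned at the maximum.
blown = [255, 255, 255, 254, 255, 255, 253, 255]

print(clipped_fraction(glint))  # 0.0: crank brightness down, detail appears
print(clipped_fraction(blown))  # 0.75: nothing left to recover
```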

pclmulqdq

Oh, yeah, compression of the dynamic range combined with increased brightness makes sense. It's not exactly just Stable Diffusion that produces that look, but also things like ML filters, etc.

ruraljuror

This reminds me of the soap-opera effect[0] on modern tvs. I have difficulty watching a movie on someone’s tv with it enabled, but they don’t even seem to notice.

0: https://en.wikipedia.org/wiki/Soap_opera_effect

johnnyanmac

A truly bizarre effect. One of the first times in my life I was ever thinking "wait, this looks TOO smooth. it's weird". As if my eyes just instinctively knew there were fake frames (before I understood the concept of "frames").

samsartor

That could be the VAE? The "latent" part of latent diffusion models is surprisingly lossy. And however much the image is getting inpainted, the entire thing gets encoded and decoded.

Edit: I'll note some new models (SD3 and Flux) have a wider latent dim and seem to suffer from this problem less.

AI generated images are also biased strongly towards medium lightness. The photographer's adjusting of the tone curve may simply give it that "look".
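Why the VAE roundtrip is lossy for exactly this kind of detail can be illustrated with a crude stand-in: latent diffusion models compress the image roughly 8x per spatial dimension before inpainting, then decode back up. Real VAEs are learned, not average pooling, but simple pooling shows the same failure mode, assuming a 1-D row of brightness values for simplicity:

```python
def pool_roundtrip(row, factor=8):
    """Crude stand-in for a VAE: average every `factor` samples down
    to one "latent" value, then stretch the latents back out."""
    latents = [sum(row[i:i + factor]) / factor
               for i in range(0, len(row), factor)]
    return [v for v in latents for _ in range(factor)]

flat = [0.3] * 16                      # smooth ocean: survives perfectly
glint = [0.3] * 7 + [1.0] + [0.3] * 8  # one bright speckle

err = lambda a, b: max(abs(x - y) for x, y in zip(a, b))
print(err(flat, pool_roundtrip(flat)))    # 0.0
print(err(glint, pool_roundtrip(glint)))  # large: the glint is smeared away
```

Smooth regions reconstruct exactly; isolated bright specks, like the water glints in the photo, are where the compression has to guess, and where the decoder's "shimmer" shows up.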

ripped_britches

“Large company ships half baked product” has gotta be the least interesting story to read

slg

Sure, if you view this as an isolated incident. But I think of it more as the latest example of the larger trend of how the industry has gone mad actively making their products worse with half-baked AI features. That is a more interesting story.

DrewADesign

And this is the closest thing the professional imaging world has to a broadly-available tool designed for high-end professional use cases. It's barely consistently good enough for throwaway blog headers in its current state for large edits, and for small edits it's exactly 0 percent better than the removal tools they added 20 years ago. Adobe better start giving some love to its professional users because the alternatives are getting better, and their ecosystem is getting worse. It's like they're trying to put themselves in a position where they're competing with Canva and Instagram rather than Affinity, Procreate, Rebelle, etc. If it's not clear to them yet that they're not right around the corner from having their AI tools be a drop-in replacement for their regular stuff, they're gonna have a bad time.

thegeomaster

Is it actively worse, though? My impression is that all of the other, classical-based in-painting methods are still alive and well in Adobe products. And I think their in-painting works well, when it does work. To me, this honestly sounds like an improvement, especially in a creative tool like Lightroom or Photoshop --- the artist has more options to achieve their vision; it's usually understood to be up to the artist to use the tools appropriately.

DrewADesign

Unfortunately, the Adobe ecosystem as a whole is OS-level complex, and they're pretty much ignoring anything that isn't directly related to generative AI, and that stuff is only consistently good for the lowest-level use cases. Miles more useful than Comfy, etc. for most artists and designers, but not close for people who need to do more skillful work. The value of Adobe as an ecosystem was their constant upgrading and keeping up with the times, and now, if it's not some cockamamie generative AI feature, it's going nowhere. They're even worse with bug fixes than they were before.

ryandrake

I'm not an artist or an Adobe customer, but when I see products adding half-baked or defective features, it tarnishes their brand and would definitely make me re-consider trusting their product. Especially for professional use, and regardless of whether the rest of the product still works fine. It's a general indicator of carelessness and tolerance of quality problems.

johnnyanmac

The fact that we call it the least interesting story shows exactly how interesting it is that we just accept that companies are expected to ship broken slop.

jdoliner

It's telling you what it's mining in the background with those extra gpu cycles.

IAmGraydon

It’s almost like integrating a poorly understood black box with your software is a bad idea.

ttoinou

I don't see where in the picture the zoomed-in bitcoin crop comes from

jxi

You have to click into it as it's not visible from the preview.

crooked-v

Bottom left.

permo-w

it's in the second picture not the first

uberman

Down from the tip of the bird's right wing, near the very bottom of the image.

missing-acumen

Question to people knowing adobe lightroom, could this feature be compromised? Is this just doing API calls to some remote thing?

nshireman

Lightroom has a local Heal/Remove feature, and at least with LR Classic you have to tick a box for the AI remove, which processes it on Adobe servers.

As for whether it can be compromised... Probably? It sends all or some of your photo to a remote server, so that could certainly be intercepted.

missing-acumen

I mean, getting the model to behave this way looks too easy, and I'd guess Adobe does QC on the features it releases, so I don't see an alternative explanation, unless Adobe's QC is poor or nonexistent.

wmf

I'm not sure what you mean by compromised but I'm pretty sure Adobe Firefly AI features are server-based. These features are too good to be done locally.

missing-acumen

By compromised I mean something like someone having access to adobe's servers where this is running and uploading troll models or toying with the model's responses

jsheard

Plus even if it could be done locally, doing it server-side has the side benefit (for Adobe) of making it trivial to prevent pirates from ever being able to use those features.

benoau

They'll just add a disclaimer somewhere.

gedy

Heaven forbid your picture has a woman in it somewhere though; Adobe's AI will refuse to fill half the time. I've taken to censoring out blocks of the image with black squares if it has any body parts showing (still clothed), filling, copying, then undoing the censoring. It's pretty ridiculous for a paid subscription.

486sx33

So HN, any theories on how this happened ?

jsheard

They used a circular mask and the model overfitted on Bitcoins as likely examples of circles? Adobe's models are only trained on their stock image library and they have a whopping 646,136 Bitcoin images.

https://stock.adobe.com/search?k=Bitcoin

mouse_

that seems like too many

jsheard

Correction, they have nearly a million Bitcoin images once you unfilter the ones tagged as AI generated, which are hidden by default. I assume they don't train their models on those though.

stefan_

All of them AI slop of course. They train on this? I guess it's garbage in, garbage out.

Gigachad

Seems like they used some AI tool to remove speckles from an image. The tool has to generate a likely replacement. And one of the speckles looked a bit like a coin.

lucb1e

That's the bit that puzzles me the most though: you want that bit gone, so why would it fill in what that shimmer looks like? If there's an airplane in the sky that you want gone from a medieval movie frame, so you select the airplane and select "remove", surely it doesn't fill in an airplane because that's the closest match for what's being selected?

I must be missing something obvious but I don't see it mentioned in the submission or comments, or perhaps I'm not making the connection

Gigachad

What does "gone" mean, though? You aren't just drawing a black or transparent spot underneath what you want removed. You're filling it in with the most likely background, which is not a trivial operation. It's AI generative fill, which is inherently unpredictable. You'd expect it to generate ocean, but it drew a bitcoin.

That's just how AI generative fill works. You keep running it until it looks how you want.
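The contrast with classical (non-generative) fill is worth making concrete. A toy version of classical inpainting just diffuses the surrounding pixels into the hole by repeated neighbour averaging; by construction it can only produce blends of what's already around the mask, so it can blur a glint away but can never hallucinate a coin. A 1-D sketch with invented "ocean" brightness values:

```python
def classical_fill(row, mask, iters=200):
    """Fill masked samples by repeatedly averaging their neighbours
    (Gauss-Seidel smoothing). The result is always a blend of the
    surrounding values: no new structure can appear in the hole."""
    row = list(row)
    for _ in range(iters):
        for i, masked in enumerate(mask):
            if masked:
                row[i] = (row[i - 1] + row[i + 1]) / 2
    return row

ocean = [0.30, 0.31, 0.95, 0.96, 0.32, 0.30]  # two bright glint pixels
mask = [False, False, True, True, False, False]
filled = classical_fill(ocean, mask)
print([round(v, 3) for v in filled])  # glint replaced by nearby ocean tones
```

A generative fill instead *samples* plausible content for the hole from a learned image distribution, which is why rerunning it gives different results, and why, once in a while, "plausible content for a small shiny circle over water" turns out to be a bitcoin.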