
Arbitrary-Scale Super-Resolution with Neural Heat Fields

mrybczyn

Hrm. On a 600x600 upscale of nature portrait photography, it has a LOT of artifacts. Perhaps too far out of distribution?

WhitneyLand

Seems like a nice result, but it wouldn't have hurt for them to give a few performance benchmarks. I understand that the point of the paper was a quality improvement, but it's always nice to reference a baseline for practicality.

vessenes

Not disagreeing, but the parameter count is listed in the single-digit millions (which surprised me). So I would expect this to be very fast on modern hardware.

flerchin

I'd like to see the results in something like Wing Commander Privateer.

nthingtohide

DLSS will benefit greatly from research in this area. DLSS 4 uses transformers.

DLSS 3 vs DLSS 4 (Transformer)

https://www.youtube.com/watch?v=CMBpGbUCgm4

adhoc32

Instead of training on vast amounts of arbitrary data that may lead to hallucinations, wouldn't it be better to train on high-resolution images of the specific subject we want to upscale? For example, using high-resolution modern photos of a building to enhance an old photo of the same building, or using a family album of a person to upscale an old image of that person. Does such an approach exist?

0x12A

Author here -- Generally in single-image super-resolution, we want to learn a prior over natural high-resolution images, and for that a large and diverse training set is beneficial. Your suggestion sounds interesting, though it's more reminiscent of multi-image super-resolution, where additional images contribute additional information that has to be registered appropriately.

That said, our approach is actually trained on a (by modern standards) rather small dataset, consisting only of 800 images. :)

MereInterest

Not a data scientist, but my understanding is that restricting the training data for the initial training run often results in poorer inference, simply because the model sees less data. If you're training the early layers of a model, you're often recognizing rather abstract features, such as boundaries between different colors.

That said, there is a benefit to fine-tuning a model on a reduced data set after the initial training. The initial training with the larger dataset means that it doesn’t get entirely lost in the smaller dataset.
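
Roughly, that fine-tuning step looks something like this (a sketch only; the ResNet backbone here is just a stand-in for whatever pretrained model you actually start from):

    import torch
    from torchvision import models

    # Stand-in pretrained backbone; the same pattern applies to any pretrained model.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the early layers that learned generic features (edges, color boundaries, ...)
    for p in list(model.conv1.parameters()) + list(model.layer1.parameters()):
        p.requires_grad = False

    # ...and fine-tune the remaining layers on the small, domain-specific dataset
    # at a low learning rate, so the model doesn't drift too far from its prior.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-5
    )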

jiggawatts

The learned frequency banks reminded me of a notion I had: Instead of learning upscaling or image generation in pixel space, why not reuse the decades of effort that has gone into lossy image compression by generating output in a psychovisually optimal space?

Perhaps frequency space (discrete cosine transform) with a perceptually uniform color space like UCS. This would allow models to be optimised so that they spend more of their compute budget outputting detail that's relevant to human vision. Color spaces that split brightness from chroma would allow increased contrast detail and lower color detail. This is basically what JPG does.
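
A rough numpy/scipy sketch of the kind of representation I mean (illustrative only; the coefficients are the usual BT.601 ones and the random array just stands in for a real image):

    import numpy as np
    from scipy.fft import dctn

    # BT.601-style luma/chroma split (what JPEG uses): full detail in luma,
    # less detail needed in the two chroma channels.
    def rgb_to_ycbcr(img):  # img: float array in [0, 1], shape (H, W, 3)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.331 * g + 0.500 * b
        cr =  0.500 * r - 0.419 * g - 0.081 * b
        return np.stack([y, cb, cr], axis=-1)

    def blockwise_dct(channel, block=8):  # H and W must be divisible by `block`
        h, w = channel.shape
        blocks = channel.reshape(h // block, block, w // block, block).swapaxes(1, 2)
        return dctn(blocks, type=2, norm='ortho', axes=(-2, -1))

    img = np.random.rand(64, 64, 3)        # stand-in for a real image
    ycc = rgb_to_ycbcr(img)
    coeffs_y = blockwise_dct(ycc[..., 0])  # 8x8 grid of 8x8 DCT coefficient blocks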

crazygringo

> by generating output in a psychovisually optimal space? Perhaps frequency space (discrete cosine transform)

I've never understood the DCT to be psychovisually optimal at all. At lower bitrates, it degrades into ringing and blockiness that don't match a "simplified perception" at all.

The frequency domain models our auditory space well, because our ears literally process frequencies. Bringing that over to the visual side has never been about "psychovisual modeling" but about existing mathematical techniques that happen to work well, despite their glaring "psychovisual" flaws.

On the other hand, yes a HSV color space could make more sense than RGB, for example. But I'm not sure it's going to provide a significant savings? I'd certainly be curious. It also might create problems though, because hue is undefined when saturation is zero, saturation is undefined when brightness is zero, etc. It's not smooth and continuous at the edges the way RGB is. And while something like CIELAB doesn't have that problem, you have the problem of keeping valid value combinations "in bounds".
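
You can see the discontinuity with the standard library's colorsys (a quick sketch of what I mean):

    import colorsys

    # At zero saturation, hue collapses to an arbitrary value:
    colorsys.rgb_to_hsv(0.5, 0.5, 0.5)    # (0.0, 0.0, 0.5) -- hue is meaningless here

    # Two nearly identical grays land on opposite sides of the hue circle:
    colorsys.rgb_to_hsv(0.501, 0.5, 0.5)  # hue ~ 0.0   (red side)
    colorsys.rgb_to_hsv(0.5, 0.5, 0.501)  # hue ~ 0.667 (blue side)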

pizza

JPEG is good for when you want a picture to look reasonably good while throwing away ~90-95% of the data. In fact, there's a relatively new JPEG variant that lets you get even better psychovisual fidelity for the same compression level by just doing JPEG in the XYB color space, xybjpeg. JPEG is also a very simple algorithm, when compared to the ones that'd be noticeably better near 99% compression.

To beat blockiness/banding across very gradually varying color gradients (think eg the gradient of a blue sky), JPEG XL has to whip out a lot of tricks, like handling sub-LF DCT coefficients between blocks, heterogeneous block sizes, deblocking filters for smoothing, and heterogeneous quantization maps.

BTW, one of the ways different camera manufacturers tried to position themselves as having the cameras that produced the best pictures was by using custom, proprietary quantization tables optimized for psychovisual quality.
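
For anyone curious what those tables do, here's a rough sketch of the quantization step they plug into (the table is the baseline JPEG luminance table from the spec; the helper function is just illustrative):

    import numpy as np

    # Baseline JPEG luminance quantization table (Annex K of the JPEG spec).
    # Larger entries = coarser quantization for frequencies the eye cares less about.
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def quantize_block(dct_block, table=Q_LUMA, scale=1.0):
        # `scale` plays the role of the quality setting; a vendor's custom table
        # would simply replace `table`.
        q = np.round(dct_block / (table * scale))
        return q * (table * scale)  # dequantized values the decoder reconstructs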

mturnshek

You may already know this, but image generators like Stable Diffusion and Flux already do this in the form of “latent diffusion”.

Rather than operate on pixel space directly, they learn to operate on images that have been encoded by a VAE (latents). To generate an image, you run the reverse diffusion process they've learned (actually a flow, in the case of Flux) and then decode the result using the VAE.

These VAE-encoded latent images are 8x smaller in width/height and have 4 channels in the case of Stable Diffusion and 16 in the case of Flux.
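
A rough sketch of what that looks like with the diffusers library (the checkpoint name and exact API details may vary between versions):

    import torch
    from diffusers import AutoencoderKL

    # A commonly used Stable Diffusion VAE checkpoint (example only).
    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

    x = torch.randn(1, 3, 512, 512)  # stand-in for an RGB image scaled to roughly [-1, 1]
    with torch.no_grad():
        z = vae.encode(x).latent_dist.sample()  # -> (1, 4, 64, 64): 8x smaller, 4 channels
        x_hat = vae.decode(z).sample            # -> (1, 3, 512, 512)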

I do think it would be more useful if it worked more like you said, though - if the channels weren’t encoded arbitrarily but some of them had pretty clear, useful human meaning like lightness, it would be another hook to control image generation.

To some extent, you can control the existing VAE channels, but it is pretty finicky.

sigmoid10

If there's one thing that neural networks have shown, it's that they are much better than humans at picking up encoding patterns for realistic tasks. There are so many aspects that could be used in dimensionality reduction that it seems pretty wild we've come this far with human-designed patterns. From a top-down engineering perspective, it might seem like a disadvantage to have algorithms that are not tailored to particular cases. But when you want things like general-purpose image generation, it's simply much more economical to let ML figure out which dimensions to focus on, because humans would spend years coming up with the details of certain formats and still not cover half the cases.

dahart

Interesting thoughts! First thing to mention is that if you look at the code, it uses SSIM, which is a perceptual image metric. Second is that it may be using sRGB, which isn’t a perceptually uniform color space, but is closer to one than linear RGB. I say that simply because most images these days are sRGB encoded. Whether Thera is depends on the dataset.

Aren’t Thera’s frequency banks pretty darn close to a DCT or Fourier transform already? This is a frequency-space decomposition & reconstruction, and the goal is similar to JPG’s in that it aims to capture the low frequencies accurately and skimp on the frequencies that matter less, either because they’re less visible or because they lead to error (aliasing artifacts). It doesn’t seem entirely accurate to frame this paper as learning in pixel space.

As far as perceptual color spaces, yeah that might be worth trying. It’s not clear exactly what the goal is or how it would help, but it might. Thera does use the same color spaces that JPG encoding uses: RGB and YCbCr, which are famously bad. Perceptual color spaces save some bits in the file format, and like frequency space, they are convenient and help with perceptual decisions, but it’s less common to see them used to save work, at least outside of research. Notably, image generation often needs to work in linear color space anyway, and convert to a perceptual color space at the end. For example, CG rendering is all done in linear space, even when using a perceptual color metric to guide adaptive sampling.
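
For reference, the standard sRGB transfer functions look like this (a minimal sketch; rendering happens on the linear values, and the roughly perceptual gamma encoding is applied only for storage/display):

    import numpy as np

    # Standard sRGB <-> linear conversions (IEC 61966-2-1).
    def srgb_to_linear(c):  # c in [0, 1]
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)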

Another question worth asking is whether a neural network in general already learns the perceptual factors. When it comes to black-box training, if the data and loss function capture what a viewer needs to see, then the network will likely learn what it needs and use its own notion of perceptual metrics in its latent space. In that case, it may not help to use inputs and outputs that are encoded in a perceptual space, and we might be making incorrect assumptions.

In this case with Thera, the paper’s goal may be difficult to pin down perceptually. Doesn’t the arbitrary in ‘arbitrary-scale super resolution’ toss viewing conditions and the notion of an ideal viewer out the window? If we don’t even want to know what the solid angle of a pixel is, we can’t know very much about how they’re perceived.

pizza

We do; see e.g. LPIPS loss.
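
A minimal sketch of using it as a loss, assuming the `lpips` pip package:

    import torch
    import lpips  # pip install lpips

    loss_fn = lpips.LPIPS(net='alex')          # AlexNet-based variant
    img0 = torch.rand(1, 3, 256, 256) * 2 - 1  # inputs expected in [-1, 1]
    img1 = torch.rand(1, 3, 256, 256) * 2 - 1
    d = loss_fn(img0, img1)                    # learned perceptual distance per image pair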

littlestymaar

> why not reuse the decades of effort that has gone into lossy image compression by generating output in a psychovisually optimal space

I've been wondering exactly this for a while, if somebody more knowledgeable knows why we're not doing that I'd be happy to hear it.

flufluflufluffy

Was anyone else expecting infinitely zoomable pictures from that title? I am disappoint

seanalltogether

I would love to see this kind of work applied to old movies from the 30s and 40s like the Marx Brothers.

throwaway2562

Just curious: why?

It wouldn’t be more funny ha-ha, just more funny strange.

Hizonner

Where are the ground truth images?

WhitneyLand

Click through to the actual paper and they are in the last column labeled “GT”.