
Super-resolution of Sentinel-2 images (10 m -> 5 m)

DoctorOetker

Pff, making up details 2x in both directions... they could at least have done real synthetic aperture calculations...

curiousObject

The image sensor samples different light wavelengths with a time offset of about 250 ms, as the satellite moves over the Earth.
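(A quick back-of-the-envelope in Python, assuming a ground-track speed of roughly 7 km/s, which is a ballpark figure for low Earth orbit rather than an official Sentinel-2 value: a 250 ms inter-band delay corresponds to a sizeable along-track shift that has to be co-registered away before the bands can be compared.)

    # Rough along-track displacement implied by the inter-band delay.
    # Ground-track speed is an assumed ballpark, not an official figure.
    ground_speed_m_s = 7_000   # m/s, approximate LEO ground-track speed
    band_delay_s = 0.25        # ~250 ms offset between spectral bands
    pixel_size_m = 10          # the 10 m bands

    shift_m = ground_speed_m_s * band_delay_s   # ~1750 m along track
    shift_px = shift_m / pixel_size_m           # ~175 pixels at 10 m resolution
    print(f"~{shift_m:.0f} m, about {shift_px:.0f} px along-track between bands")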

I think that means it could be possible to enhance the resolution by using luminance data from one wavelength to make an 'educated guess' at the luminance of other wavelengths. It would be a more advanced version of the kind of interpolation that standard cameras do with a Bayer color filter array.

So it seems possible to get some extra information out of the system, with a good likelihood of success but some risk of hallucinated details.
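(A toy sketch of that kind of cross-band 'educated guess', assuming two already co-registered bands of the same scene. This is simple ratio-based sharpening, in the spirit of Brovey/pan-sharpening, and is not what the OP's model actually does; the band names below are placeholders.)

    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def guided_sharpen(low_res, guide, scale=2, eps=1e-6):
        """Upsample `low_res` and borrow high-frequency detail from `guide`.

        low_res : 2-D array, coarser band (e.g. a 20 m band)
        guide   : 2-D array, finer band of the same area (shape = low_res * scale)
        """
        up = zoom(low_res, scale, order=1)                  # plain bilinear upsample
        guide_smooth = gaussian_filter(guide, sigma=scale)  # guide as it would look at coarse scale
        # Modulate the upsampled band by the guide's local detail ratio.
        return up * (guide + eps) / (guide_smooth + eps)

    # toy usage with random data standing in for two co-registered bands
    rng = np.random.default_rng(0)
    b_10m = rng.random((200, 200))          # pretend 10 m guide band
    b_20m = zoom(b_10m, 0.5, order=1)       # pretend 20 m band of the same scene
    b_sharp = guided_sharpen(b_20m, b_10m)  # back to 200x200, with guide detail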

The image sensor and filters are quite complex, much more complicated than a simple Bayer-filter CCD/CMOS sensor. As far as I know it is not a moving filter but a fixed one; the satellite, of course, is what's moving.

I don't know if the 'Super-Resolution' technique in the OP is taking advantage of that possibility, though. I agree it would be disappointing if it's just guessing, although perhaps a carefully trained ML system would still figure out how to use the available data in the way I've suggested.

the optical Multi-Spectral Instrument (MSI) samples 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60 m spatial resolution

Due to the particular geometrical layout of the focal plane, each spectral band of the MSI observes the ground surface at different times.

https://sentiwiki.copernicus.eu/web/s2-mission

I'm making some guesses here, because I don't understand most of the optics and camera design that the ESA page describes. For instance, can anyone explain why there's a big ~250 ms offset between measuring different light wavelengths, even though the optics and filters are fixed immobile relative to each other? Thank you.

The orbital period is about 100 minutes, in a Sun-synchronous orbit.

Actually there are three satellites. The constellation is supposed to be two, but there's currently a spare as well. Their orbits are very widely separated, supposedly on opposite sides of the planet, so I don't know how much enhancement you could get by combining images from all the satellites, or whether the OP's method even tries that.

Anyway, the folks at ESA working with Sentinel-2/Copernicus must have already thought very hard about anything they can do to enhance these images, surely?

Edit: the L1BSR project linked from the OP's git page does mention 'exploiting sensor overlap'! So I assume it really is doing a process similar to what I've suggested.
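(For intuition only: one step such a pipeline needs is measuring the subpixel offset between two overlapping detector strips. A minimal sketch using phase correlation on synthetic data; the arrays and the known shift are made up for illustration, and this is not L1BSR's actual code.)

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    # Hypothetical stand-ins for two overlapping detector strips of the same
    # ground area: the second is the first shifted by a known subpixel amount,
    # plus a little noise.
    rng = np.random.default_rng(1)
    strip_a = rng.random((128, 128))
    strip_b = nd_shift(strip_a, (0.4, -0.7), order=3) + 0.01 * rng.random((128, 128))

    # Phase correlation with upsampling recovers the offset to a fraction of a pixel.
    offset, error, _ = phase_cross_correlation(strip_a, strip_b, upsample_factor=50)
    print("estimated (row, col) shift:", offset)  # magnitude ~ (0.4, 0.7); sign per skimage convention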

RF_Savage

Yeah...

RicoElectrico

Sentinel-2 images are not exactly lined up across different revisits of the same spot; there are minute yet perceptible subpixel offsets. If there is sufficient aliasing in the system, it should in theory be possible to extract extra information from multiple visits. However, the linked repo doesn't appear to do that.
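(For what it's worth, the classical way to exploit those subpixel offsets is multi-frame "shift-and-add" super-resolution. A minimal sketch of the core idea, assuming the per-frame offsets have already been estimated, e.g. by phase correlation; real methods add regularization and a deconvolution step, and this is not what the linked repo does.)

    import numpy as np

    def shift_and_add(frames, offsets, scale=2):
        """Naive multi-frame super-resolution by shift-and-add.

        frames  : list of 2-D low-res arrays of the same scene
        offsets : list of (dy, dx) subpixel offsets of each frame, in low-res pixels
        scale   : upsampling factor of the output grid
        """
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        yy, xx = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, offsets):
            # drop each low-res sample onto the nearest cell of the finer grid
            hy = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
            hx = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hy, hx), frame)
            np.add.at(cnt, (hy, hx), 1)
        # average where the grid received samples; empty cells stay zero
        return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)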