A new PNG spec

425 comments · June 24, 2025

joshmarinacci

A fun trick I do with my web based drawing tools is to save a JSON representation of your document as a comment field inside of a PNG. This way the doc you save is immediately usable as an image but can also be loaded back into the editor. Also means your downloads folder isn’t littered with unintelligible JSON files.
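
Roughly like this with Pillow, for example (a minimal sketch; the chunk key and document structure are invented for illustration):

    import json
    from PIL import Image, PngImagePlugin

    doc = {"shapes": [{"type": "rect", "x": 10, "y": 20, "w": 100, "h": 50}]}

    # Render the document to pixels however you like, then attach the
    # JSON source as a text chunk before saving.
    img = Image.new("RGB", (200, 100), "white")
    meta = PngImagePlugin.PngInfo()
    meta.add_text("document-json", json.dumps(doc))  # key name is arbitrary
    img.save("drawing.png", pnginfo=meta)

    # The file is a normal PNG, but the editor can recover the document:
    restored = json.loads(Image.open("drawing.png").text["document-json"])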

dragonwriter

This is also what many AI image gen frontends do, saving the generation specs as comments so you can open the image and get the prompt and settings (or, for e.g. ComfyUI, full workflows) loaded to tweak.

Really, I think it's pretty common for tools that work with images generally.

speps

Macromedia Fireworks did it 20 years ago; PNG was the default save format. Of course, it wasn’t JSON stored in there…

dtech

A fun trick, but I wouldn't want to explain to users why their documents are saved as .png files, nor why their work is lost after they opened and saved the PNG in Paint.

KetoManx64

If a user is using paint to edit their photos, they're 100% not going to be interested in having the source document to play around with.

IvanK_net

Macromedia did this when saving Fireworks files into PNG.

Also, Adobe saves AI files into a PDF (every AI file is a PDF file), and Photoshop can save PSD files into TIFF files (people wonder why these TIFFs have several layers in Photoshop, but just one layer in all other software).

giancarlostoro

> Macromedia did this when saving Fireworks files into PNG.

I forgot about this...

Fireworks was my favorite image editor, I don't know that I've ever found one I love as much as I loved Fireworks. I'm not a graphics guy, but Fireworks was just fantastic.

shiryel

That is also how Krita stores brushes. Unfortunately, that can cause some unexpected issues when there's too much data [1][2].

[1] - https://github.com/Draneria/Metallics-by-Draneria_Krita-Brus...

[2] - https://krita-artists.org/t/memileo-impasto-brushes/92952/11...

oakwhiz

If a patch is needed for libpng to get around the issue, maybe Krita should vendor libpng for usability. It's not unreasonable for people to want to create gigantic files like this.

neuronexmachina

This would be great for things like exported Mermaid diagrams.

osetnik

> save a JSON representation of your document as a comment field inside of a PNG

Can you compress it? I mean, theoretically there is this 'zTXt' chunk, but it never worked for me, therefore I'm asking.
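
For what it's worth, Pillow will write a compressed zTXt chunk if you pass zip=True to add_text (a minimal sketch continuing the JSON-in-PNG idea; the key name is arbitrary):

    from PIL import Image, PngImagePlugin

    meta = PngImagePlugin.PngInfo()
    # zip=True stores the value as a zlib-compressed zTXt chunk
    # instead of a plain tEXt chunk.
    meta.add_text("document-json", '{"shapes": []}', zip=True)

    Image.new("RGB", (1, 1)).save("compressed-comment.png", pnginfo=meta)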

tomtom1337

Could you expand on this? It sounds a bit preposterous to save text, as JSON, inside an image - and then expect it to be immediately usable… as an image?

LeifCarrotson

They're not saving text, they're saving an idea - a "map" or a "CAD model" or a "video game skin" or whatever.

Yes, a hypothetical user's sprinkler layout "map" or whatever they're working on is actually composed of a few rectangles that represent their house, and a spline representing the garden border, and a circle representing the tree in the front yard, and a bunch of line segments that draw the pipes between the sprinkler heads. Yes, each of those geometric elements can be concisely defined by JSON text that defines the X and Y location, the length/width/diameter/spline coordinates or whatever, the color, etc. of the objects on the map. And yes, OP has a rendering engine that can turn that JSON back into an image.

But when the user thinks about the map, they want to think about the image. If a landscaping customer is viewing a dashboard of all their open projects, OP doesn't want to have to run the rendering engine a dozen times to re-draw the projects each time the page loads just to show a bunch of icons on the screen. They just want to load a bunch of PNGs. You could store two objects on disk/in the database, one being the icon and another being the JSON, but why store two things when you could store one?

bitpush

Not OP, but PNG (and most image/video formats) allows metadata, and most allow arbitrary fields. Good parsers know to ignore/safely skip over fields that they are not familiar with.

https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_PN...

This is similar to HTTP request headers, if you're familiar with that. There are a set of standard headers (User-Agent, ETag etc) but nobody is stopping you from inventing x-tomtom and sending that along with HTTP request. And on the receiving end, you can parse and make use of it. Same thing with PNG here.
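
Continuing the analogy with the made-up x-tomtom header, using the requests library; servers that don't recognize it simply ignore it, much like PNG decoders ignore unknown chunks:

    import requests

    # "x-tomtom" is an invented header; servers that don't know it
    # skip it, just as PNG decoders skip unknown ancillary chunks.
    requests.get("https://example.com", headers={"x-tomtom": "hello"})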

woodrowbarlow

this is useful for code that renders images (e.g. data-visualization tools). the image is the primary artifact of interest, but maybe it was generated from data represented in JSON format. by embedding the source data (invisibly) in the image, you can extract it later to modify and re-generate.

chown

They save the text as JSON in comments, but the file itself is a PNG, so you can use it as an image (like previewing it), since viewers ignore the comments. However, the OP’s editor can load the file back, parse the comments, get the original data, and continue editing. Just one file to maintain. Quite clever actually.

meindnoch

Check what draw.io does when you download a PNG.

behnamoh

No, GP meant they add the JSON text to the metadata of the image as a comment.

ksec

It is just a spec on something widely implemented already.

Assuming next-gen PNG will still require a new decoder, they could just call it PNG2.

JPEG-XL already provides everything most people ask of a lossless codec. If it has any problems, they are its encoding and decoding speed and resource usage.

The current champion of lossless image codecs is HALIC. https://news.ycombinator.com/item?id=38990568

thesz

HALIC discussion page [1] says otherwise.

[1] https://encode.su/threads/4025-HALIC-(High-Availability-Loss...

It looks like LEA 0.5 is the champion.

And HALIC is not even close to the top ten in this [2] lossless image compression benchmark.

[2] https://github.com/WangXuan95/Image-Compression-Benchmark

poly2it

It looks like HALIC offers very impressive decode speeds within its compression range.

Aloisius

I'll be honest, I ignored JPEG XL for a couple years because I assumed that it was merely for extra large images.

voxleone

I'm using PNG in a computer vision image annotation tool[0]. The idea is to store the class labels directly in the image [dispensing with the sidecar text files], taking advantage of PNG's beautiful metadata capabilities. The next step is to build a specialized extension of the format for this kind of task.

[0]https://github.com/VoxleOne/XLabel

illiac786

> If there are any problems it is its encoding and decoding speed and resources.

And this will improve over time, like jpg encoders and decoders did.

ksec

I hope I am very wrong, but this isn't a given. In the past, reference encoders and decoders didn't concern themselves with speed and resources, but the last 10 years have shown that most reference encoders and decoders already put considerable effort into speed optimisation. And it seems people are already looking at hardware JPEG XL implementations. (I hope and guess this is for lossless only.)

illiac786

I would agree we will see fewer improvements than when comparing a modern JPEG implementation to the reference one.

When it comes to hardware encoding/decoding, I am not following your point, I think. The fact that some are already looking at hardware implementations of JPEG XL means that…?

I just know JPEG hardware acceleration is quite common, hence I am trying to understand how that makes JPEG XL different/better/worse?

ChrisMarshallNY

Looks like it's basically reaffirming what a lot of folks have been doing, unofficially.

For myself, I use PNG only for computer-generated still images. I tend to use good ol' JPEG for photos.

bla3

WebP lossless is close to state of the art and widely available. It's also not widely used. The takeaway seems to be that absolute best performance for lossless compression isn't that important, or at least it won't get you widely adopted.

ProgramMax

WebP maxes at 8-bit per channel. For HDR, you really need 10- or 12-bit.

WebP is amazing. But if I were going to label something "state of the art" I would go with JPEGXL :)

mchusma

I don't know that I have ever used JPEG or PNG lossless in practical usage (e.g. I don't think 99.9% of mobile app or web use cases are for lossless). WebP's lossy performance is just not worth it in practice, which is why WebP never took off, IMO.

Are there use cases for lossless other than archival?

adzm

Only downside is that webp lossless requires RGB colorspace so you can't, for example, save direct YUV frames from a video losslessly. AVIF lossless does support this though.

yyyk

When it comes to metadata, a feature not being widely implemented (yet) is not that big a problem. Select tools will do for metadata, so this is an advancement for PNG.

klabb3

What about transparency? That’s the main benefit of PNG imo.

cmiller1

Yes JPEG-XL has an alpha channel.

cptcobalt

It seems like this new PNG spec just cements what exists already, great! The best codecs are the ones that work on everything. PNG and JPEG work everywhere, reliably.

Try opening a HEIC or AV1 or something on a machine that doesn't natively support it down to the OS-level, and you're in for a bad time. This stuff needs to work everywhere—in every app, in the OS shell for quick-looking at files, in APIs, on Linux, etc. If a codec does not function at that level, it is not functional for wider use and should not be a default for any platform.

ecshafer

I work with a LOT of images in a lot of image formats, many of them extremely niche formats used in specific fields. There is a massive challenge in really supporting all of these, especially when you get down to the fact that some specs are a little looser than others. Even libraries can be very rough: sure, it says on the tin that it supports JPG and TIF and HEIC... but does it support a 30 GB JPEG? Does it support all possible metadata in the file?

lazide

This new spec will make PNG even worse than HEIC or AV1 - you won’t know what codec is actually inside the PNG until you open it.

hulitu

> you won’t know what codec is actually inside the PNG until you open it.

But this is a feature. Think about all those exploits made possible by this feature. Sincerely, the CIA, the MI-6, the FSB, the Mossad, etc.

qwertox

> Officially supports Exif data

Probably the best news here. While you can already write custom data into a header, having Exif is good.

BTW: Does Exif have a magnetometer (rotation) and acceleration (gravity) field? I often wonder why Google isn't saving this information in the images the camera app saves. It could help so much with post-processing, like leveling the horizon or creating panoramas.

Aardwolf

Exif can also cause confusion for how to render the image: should its rotation be applied or not?

Old decoders and new decoders could now render an image with Exif rotation differently, since it's an optional chunk that can be ignored; and even for new decoders, the spec gives no recommendations for how to use the Exif rotation.

It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers, but it's only a vague recommendation; there's no strict "don't rotate the image", which would be the only backwards-compatible way.

With jpeg's exif, there have also been bugs with the rotation being applied twice, e.g. desktop environment and underlying library both doing it independently

DidYaWipe

The stupid thing is that any device with an orientation sensor is still writing images the wrong way and then setting a flag, expecting every viewing application to rotate the image.

The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.

ralferoo

One interesting thing about JPEG is that you can rotate an image with no quality loss. You don't need to convert each 8x8 square to pixels, rotate and convert back, instead you can transform them in the encoded form. So, rotating each 8x8 square is easy, and then rotating the image is just re-ordering the rotated squares.
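
The block-level half of that is easy to check numerically (a sketch with NumPy/SciPy; real tools like jpegtran do the equivalent on the quantized coefficients):

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.arange(64, dtype=float).reshape(8, 8)

    # Rotate 90 degrees clockwise in the pixel domain...
    rotated = np.rot90(block, k=-1)

    # ...and the same rotation in the DCT domain: transpose the
    # coefficients, then negate every odd horizontal frequency
    # (which mirrors the block horizontally).
    coeffs = dctn(block, norm="ortho")
    coeffs_rot = coeffs.T * ((-1.0) ** np.arange(8))

    assert np.allclose(idctn(coeffs_rot, norm="ortho"), rotated)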

Someone

> The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.

The hardware likely is optimized for the common case, so I would think that can be a lot slower. It wouldn’t surprise me, for example, if there are image sensors out there that can only be read out in top to bottom, left to right order.

Also, with RAW images and sensors that aren’t rectangular grids, I think that would complicate RAW image parsing. Code for that could have to support up to four different formats, depending on how the sensor is designed.

klabb3

TIL, and hard agree (at face value). I’ve been struck by this with arbitrary rotation of images depending on the application; very annoying.

What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.

mavhc

Because your non-smartphone camera doesn't have enough ram/speed to do that I assume (when in burst mode)

If a smartphone camera is doing it, then bad camera app!

andsoitis

There is no standard field to record readouts of a camera's accelerometers or inertial navigation system.

Exif fields: https://exiv2.org/tags.html

bawolff

Personally I wish people just used XMP. Exif is such a bizarre format. It's essentially embedding a TIFF image inside a PNG.

jandrese

Yes, but websites frequently strip all or almost all Exif data from uploaded images because some fields are used by stalkers to track people down to their real address.

johnisgood

And I strip Exif data, too, intentionally, for similar reasons.

bspammer

That makes sense to me for any image you want to share publicly, but for private images having the location and capture time embedded in the image is incredibly useful.

Findecanor

Does the meta-data have support for opting in/out of "AI training"?

And is being able to read an image without an opt-in tag something that has to be explicitly enabled in the reference implementation's API?

rynop

This is a false claim in the PR:

> Many of the programs you use already support the new PNG spec: ... Photoshop, ...

Photoshop does NOT support APNGs. The PR calls out APNG recognition as the 2nd bullet point of "What's new?"

Am I missing something? Seems like a pretty big mistake. I was excited that an art tool with some marketshare finally supported it.

ProgramMax

Phoptoshop supports the HDR part. But you are right, it does not support the APNG part.

qwertfisch

Seems a bit too late? Also, JPEG XL supports all the features and already uses advanced compression (finite-state entropy, like Zstandard). It offers lossy and lossless compression, animated pictures, HDR, EXIF, etc.

There is just no need for a PNG update; just adopt JPEG XL.

bmn__

> just

https://caniuse.com/jpegxl

No one can afford to "just". Five years later and it's only one browser! Crazy.

Browser vendors must deliver, only then it's okay to admonish an end user or Web developer to adopt the format.

Aachen

> advanced compression (finite-state entropy, like ZStandard)

I've not tried it on images, but wouldn't zstandard be exceedingly bad at gradients? It completely fails to compress numbers that change at a fixed rate

Bzip2 does that fine, not sure why https://chaos.social/@luc/114531687791022934 The two variables (inner and outer loop) could be two color channels that change at different rates. Real-world data will never be a clean i++ like it is here, but more noise surely isn't going to help the algorithm compared to this clean example

wongarsu

PNG's basic idea is to store the difference between the current pixel and the pixel above it, left of it or to the top-left (chosen once per row), then apply standard deflate compression to that. The first step basically turns gradients into repeating patterns of small numbers, which compress great. You can get decent improvements by just switching deflate for zstd
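
A quick way to see the effect, using only the standard library (a real PNG encoder picks among five filter types per row; this sketch uses just the "Sub" filter, which subtracts the left neighbour):

    import zlib

    # A horizontal gradient row: 0, 1, 2, ..., 255
    row = bytes(range(256))

    # PNG "Sub" filter: each byte minus its left neighbour, mod 256.
    filtered = bytes((row[i] - (row[i - 1] if i else 0)) % 256
                     for i in range(len(row)))

    print(len(zlib.compress(row)))       # raw gradient: deflate finds nothing to exploit
    print(len(zlib.compress(filtered)))  # constant deltas shrink to a few bytes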

adgjlsfhk1

the FSE layer isn't responsible for finding these sorts of patterns in an image codec. The domain modeling turns that sort of pattern into repeated data and then the FSE goes to town on the output.

Retr0id

zlib/deflate already has the same issue. It is mitigated somewhat by PNG row filters.

mikae1

> There is just no need for a PNG update, just adopt JPEG XL.

Tell that to Google. They gave up on XL in Chrome[1] and essentially killed its adoption.

[1] https://issues.chromium.org/issues/40168998#comment85

illiac786

I really don’t get it. Why, but why? It’s already confusing as hell, why create yet another standard (variant) with no unique selling point?

pmarreck

JPEG XL is not a "variant", it is a completely new algorithm that is also fully backwards-compatible with every single JPEG already out there, of which there are probably billions at this point.

It also has pretty much every feature desired in an image standard. It is future-proofed.

You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.

It is a worthy successor to (while also being vastly superior to) JPEG.

BobaFloutist

Is there any risk that if I open a JPEG-XL in something that knows what a JPEG is but not what a JPEG-XL is and then save it, it'll get lossy compressed? Backwards compatibility is awesome, but I know that if I save/upload/share/copy a PNG, it shouldn't change without explicit edits, right?

dylan604

> You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.

Is that gained space enough to account for the fact you now have 2 files? Sure, you can delete the original jpg on the local system, but are you going to purge your entire set of backups?

illiac786

I was referring to the new PNG, not to JPEG XL.

albert_e

So animated GIFs can be replaced by animated PNGs, with alpha blending, transparent backgrounds, and lossless compression! Some nostalgia from 2000s websites can be revived and relived :)

Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar) - but not sure if there is any standard framework.

andsoitis

> Curious if Animated SVGs are also a thing.

Yes. Relevant animation elements:

• <set>

• <animate>

• <animateTransform>

• <animateMotion>

See https://www.w3schools.com/graphics/svg_animation.asp

shakna

Slightly related, I recently hit on this SVG animation bug in Chrome (that someone else found):

https://shkspr.mobi/blog/2025/06/an-annoying-svg-animation-b...

albert_e

Oh TIL - Thanks!

This could possibly be used to build full fledged games like pong and breakout :)

jerf

SVG also supports Javascript, which will probably be a lot more useful for games.

mattigames

Overshadowed by CSS animations for almost all use cases.

lawik

But animated gradient outlines on text is the only use-case I care about.

qingcharles

Almost nowhere that supports uploading GIFs supports APNG or animated WEBP. The back end support is so low it's close to zero. Which is really frustrating.

extraduder_ire

Do you mean services that re-encode GIF files to webm/mp4? APNG just works everywhere that PNG works, and will remain animated as long as it's not re-encoded.

You can even have one frame that gets shown if and only if animation is not supported.
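
Pillow can write exactly that kind of APNG (a sketch; sizes, colours, and durations are placeholders):

    from PIL import Image

    # With default_image=True, the base image is what non-APNG viewers
    # show, and the appended images form the animation.
    fallback = Image.new("RGB", (64, 64), "gray")
    frames = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue")]

    fallback.save(
        "anim.png",
        save_all=True,
        append_images=frames,
        default_image=True,
        duration=[100, 250, 500],  # per-frame duration in milliseconds
        loop=0,                    # loop forever
    )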

riffraff

I was under the impression many gifs these days are actually served as soundless videos, as those basically compress better.

Can animated PNG beat av1 or whatever?

layer8

APNG would be for lossless compression, and probably especially for animations without a constant frame rate. Similar to the original GIF format, with APNG you explicitly specify the duration of each individual frame, and you can also explicitly specify looping. This isn’t for video, it’s more for Flash-style animations, animated logos/icons [0], or UI screen recordings.

[0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...

fc417fc802

All valid points, however AV1 also supports lossless compression and is almost certainly going to win the file size competition against APNG every time.

https://trac.ffmpeg.org/wiki/Encode/AV1#Losslessencoding

bawolff

It's also because people like to "pause" animations, and that is not really an option with APNG & GIF.

bigfishrunning

Why not? That's up to the program displaying the animation, not the animation itself -- I'm sure a pausable GIF or APNG display program is possible.

armada651

> Can animated PNG beat av1 or whatever?

Animated PNGs can't beat GIF nevermind video compression algorithms.

jeroenhd

Once you add more than 256 different colours in total, GIF explodes in terms of file size. It's great for small, compact images with limited colour information, but it can't compete with APNG when the image becomes more detailed than what you'd find on Geocities.

Aissen

> Animated PNGs can't beat GIF nevermind video compression algorithms.

Not entirely true, it depends on what's being displayed, see a few simple tests specifically constructed to show how much better APNG can be vs GIF and {,lossy} webp: http://littlesvr.ca/apng/gif_apng_webp.html

Of course I don't think it generalizes all that well…

josephg

I doubt it, given PNG is a lossless compression format. For video that's almost never what you want.

DidYaWipe

For animations with lots of regions of solid color it could do very well.

fc417fc802

> many gifs these days are actually served as soundless videos

That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.

faceplanted

You said that's not really true and then described exactly how it's true; what did you mean?

chithanh

When it comes to converting small video snippets to animated graphics, I think WebP was much better than APNG from the beginning. Only if you used GIF as an intermediate format was APNG competitive.

Nowadays, AVIF serves that purpose best I think.

theqwxas

Some years ago I used the Lottie (Bodymovin?) library. It worked great and had a nice integration: you compose your animation in Adobe After Effects, export it to an SVG plus some JSON, and the Lottie JS script would handle the animation for you. Anything else I've tried for (vector, web) animations is missing the tools or the DX for me to adopt. Curious to hear if there are more things like this.

I'm not sure about the tools and DX around animated PNGs. Is that a thing?

bmacho

> Curious if Animated SVGs are also a thing.

SVG is just html5, it has full support for CSS, javascript with buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers or allowed by browsers).

bawolff

Browsers support all that sort of thing, as long as you use an iframe. (Technically there are some subtle differences between that and HTML5, but you are right, it's mostly the same.)

If you use an <img> tag, svgs are loaded in "restricted" mode. This disables scripting and external resources. However animation via either SMIL or CSS is still supported.

vorgol

It nearly got raw socket support back in the day: https://news.ycombinator.com/item?id=35381755

jonhohle

It seems crazy to think about, but I interviewed with a power company in 2003 that was building a web app with animated SVGs.

jokoon

Both GIF and PNG use zipping for compressing data, so APNGs are not much better than GIFs.

Calzifer

(A)PNG supports semi-transparency. In GIF, a pixel is either fully transparent or fully opaque.

Also, while true-color GIFs seem to be possible, GIF is usually limited to 256 colors per image.

For those reasons alone APNG is much better than GIF.

bawolff

PNG uses deflate (same as zip) but GIF uses LZW. These are different algorithms. You should expect different compression results, I would assume.

0points

Remember when we unwillingly trained the generative AIs of our time with an endless torrent of factoids?

ggm

Somebody needs to manage approximate human dates/times in a way other people in software will align to.

"photo scanned in 2025, of something at Easter, after 1920 and before 1940"

luguenth

In EXIF, you have DateTimeDigitized [0]

For ambiguous dates there is the EDTF Spec[1] which would be nice to see more widely adopted.

[0] https://www.media.mit.edu/pia/Research/deepview/exif.html

[1] https://www.loc.gov/standards/datetime/
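
A few examples of what EDTF can express (values taken from the spec's Level 0 and 1 features; shown as a Python mapping for illustration):

    edtf_examples = {
        "1964/2008": "interval: from 1964 to 2008",
        "1920/1940": "the 'after 1920, before 1940' case above",
        "1984?":     "year 1984, but uncertain",
        "2004-06~":  "June 2004, approximate",
        "193X":      "some year in the 1930s",
    }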

ggm

I remember reading about this in a web forum mainly for Dublin Core fanatics. Metadata is fascinating.

Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks but probably only one software package honours them.

And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with file names and atime/mtime.

SchemaLoad

The issue that gets me is that Google Photos and Apple Photos will let you manually pick a date, but they won't actually set it in the photo's EXIF, so when you move platforms, all of the images that came from scans or were sent without EXIF lose their dates.

ggm

It's in sidecar files. Takeout gets them, some tools read them.

kccqzy

But there is no standardization of sidecar files, no? Whereas EXIF is pretty standard.

mbirth

IIRC osxphotos has an option to merge external metadata into the exported file.

369548684892826

A fun fact about PNG: the correct pronunciation is defined in the specification

> PNG is pronounced “ping”

See the end of Section 1 [0]

0: https://www.w3.org/TR/REC-png.pdf

gred

That makes two image format names which I will refuse to pronounce correctly (the other being GIF [1]).

[1] https://edition.cnn.com/2013/05/22/tech/web/pronounce-gif

ziml77

The only logic I ever hear for using a hard G is because that's how Graphics is said. Yet I never hear people saying jay-feg.

gred

Also "gift".

cmiller1

How do you pronounce PNG?

gred

Pee En Gee

illiac786

P&G, stands for Pee & Gloat.

kristopolous

I used to call them Nogs claiming the P was silent.

People believed me. Still funny.

NoMoreNicksLeft

"Pong". Hate me, I don't care.

ProgramMax

Even though I know about this, I still pronounce it as letters. :)

eviks

Ha, been doing it "wrong" my whole life!

dspillett

Because the creator of gifs telling the world how he pronounced it made such a huge difference :)

Not sure I'll bother to reprogram myself from “png”, “pung”, or “pee-enn-gee”.

naikrovek

When someone makes a baby, you call that person by their real name with the correct pronunciation, don’t you?

So why can’t you do that with GIF or PNG? People that create things get to name them.

AllegedAlec

> People that create things get to name them.

And if they pick something dumb enough other people get to ignore them.

pixl97

Depends...

You'll commonly call someone by their name with their preferred pronunciation, out of respect, forced or given.

In a situation where someone does something really stupid or annoying and the forced respect isn't there, most people don't.

eviks

First, it's not a baby, that's a ridiculous comparison.

But also, no, not universally even for babies, especially when the name is something ridiculous like X Æ A-Xii where even parents disagree on pronunciation, or when the person himself uses a "non-specced" variant

airstrike

Because PNGs won't answer back when I call them by some "correct" name.

freeopinion

A parent may name their baby Elizabeth. Then even the parent might call them Liz or Beth or Betsy or Bit or Bee.

LocalH

I've said "jif" for almost 40 years, and I'm not stopping anytime soon.

Hard-g is wrong, and those who use it are showing they have zero respect for others when they don't have to.

It's the tech equivalent to the shopping cart problem. What do you do when there is no incentive one way or the other? Do you do the right thing, or do you disrespect others?

pwdisswordfishz

Linguistic prescriptivism is wrong, and people who promote it are showing they have zero respect for others when they don't have to.

bigfishrunning

pronounce the jraphics interchange format any way you want, everyone knows what you're talking about anyway -- try not to get so worked up. It's not the shopping cart problem, because no-one is measurably harmed by not choosing the same pronunciation as you.

npteljes

As much as I hate jif, thinking about it, "GPU" works the same - we say gee-pee-you and not gh-pee-you. Garbage Collection is also gee-cee. So it's only logical that jif is the correct one - even if it's not the widely accepted one.

With regard to communication, aside from personal preference, one can either respect the creator or the audience. If I stood in front of 10 colleagues, all 10 of them would not understand jif, or would only get it because this issue has some history now. Gif, on the other hand, has no friction.

Genghis Khan, for example, sounds very different from its original Mongolian pronunciation. And there are a myriad of others as well.

i80and

Is this a bit?

yuters

Pronouncing it like that would invite confusion as the word ping is often used in messaging.

LegionMammal978

Reading the linked blog post on the new cICP chunk type [0], it looks like the "proper HDR support" isn't something that you couldn't already do with an embedded ICC profile, but instead a much-abbreviated form of the colorspace information suitable for small image files.

[0] https://svgees.us/blog/cICP.html

ProgramMax

PNG previously supported ICC v2. That was updated to ICC v4. However, neither of these are capable of HDR.

Maybe iccMAX supports HDR. I'm not sure. In either case, that isn't what PNG supported.

So something new was required for HDR.
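
For scale, here's a sketch of building such a chunk by hand; the four payload bytes below are the H.273 code points for BT.2020 primaries, the PQ transfer function, RGB (matrix 0), and full range (my own example values, not taken from this thread):

    import struct, zlib

    def png_chunk(ctype: bytes, data: bytes) -> bytes:
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))

    # cICP payload: colour primaries, transfer function,
    # matrix coefficients (0 = RGB), video full range flag.
    chunk = png_chunk(b"cICP", bytes([9, 16, 0, 1]))
    print(len(chunk))  # 16 bytes total, vs. kilobytes for an ICC profile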

cormorant

"common but not representable RGB spaces like Adobe 1998 RGB or ProPhoto RGB cannot use CICP and have to be identified with ICC profiles instead."

cICP is 16 bytes for identifying one out of a "list of known spaces" but they chose not to include a couple of the most common ones. Off to a great start...

I wonder if it's some kind of legal issue with Adobe. That would also explain why EXIF / DCF refer to Adobe RGB only by the euphemism "optional color space" or "option file". [1]

[1] https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_sy...

razorfen

Can anyone explain how they maintain backwards compatibility on formats like this when adding features? I assume there are byte ranges managed in the format, but with things like compression, wouldn’t compressed images be unrenderable on clients that don’t support it? I suppose it would behoove servers to serve based on what the client would support.

gmueckl

In mynunderstanding, the actual image data encoding isn't altered in this update. It only introduces an extended color space definition for the encoded data.

PNG is a highly structured file format internally. It borrows design ideas from formats like EA's Interchange File Format in that it contains lists of chunks with fixed headers encoding chunk type amd length. Decoders are expected to parse them and ignore chunk types they do not support.

joquarky

The Amiga was quite a platform. Glad to know that it had some long term influence.

joshmarinacci

The PNG format has chunks with types. So you can add an additional chunk with a new type and existing decoders will ignore it.

There is also some leeway for how encoding is done as long as you end up with a valid stream of bits at the end (called the bit stream format), so encoders can improve over time. This is common in video formats. I don’t know if a lossless image format would benefit much from that.
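
A minimal chunk walker shows how little a decoder needs in order to skip what it doesn't understand (standard-library sketch; error handling omitted and the path is a placeholder):

    import struct

    def list_chunks(path):
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n"  # PNG signature
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                f.seek(length + 4, 1)  # skip chunk data and CRC
                # Bit 5 of the first type byte (a lowercase first letter)
                # marks the chunk as ancillary: safe to ignore.
                kind = "ancillary" if ctype[0] & 0x20 else "critical"
                print(ctype.decode("ascii"), length, kind)
                if ctype == b"IEND":
                    break

    list_chunks("drawing.png")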

gmueckl

PNG is a bit unusual in that it allows a couple of alternate compressed encodings for the data that are all lossless. It is up to the encoder to choose between them (scanline by scanline, IIRC). So this encoding-algorithm leeway is implicit in a way.

jdhsddh

PNG is specifically designed to support this. Clients will simply skip chunks they do not understand.

In this case there could be an embedded reduced colour space image next to an extended color space one

LeoPanthera

> I know you all immediately wondered, better compression?. We're already working on that.

This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.

It'll be like USB-C but for images.

lifthrasiir

Better compression can also mean a new set of filter methods or a new interlacing algorithm. But yeah, any of them would cause an instant incompatibility. As noted in the relevant issue [1], we will need a new media type at the very least.

[1] https://github.com/w3c/png/issues/39#issuecomment-2674690324

Arnt

We would need a new media type. But the actual new features don't need one, because they don't break compatibility.

https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I do; otherwise, you see what I see.

snvzz

I am hopeful that whatever better compression arrives doesn't end up multiplying memory requirements or increasing the burden on the CPU, especially on decompression.

Now, PNG datatype for AmigaOS will need upgrading.

Arnt

I don't see why? If your video output is plain old RGB (like the Amiga hardware), then an unmodified decoder will handle new files without a problem. You only need a new decoder if your video output can handle more vivid colours than RGB can express.

Lerc

It has fields to say what compression is used. Adding another compression form should be handled by existing software by recognizing the file as a valid PNG that it can't decompress.

The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.

koito17

> Having an extensible format and electing never to extend it seems pointless.

This proves the OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason being that if the container starts to support too many formats, implementations will start restricting themselves to only the subsets the implementers care about.

For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).

cm2187

But where the analogy with USB-C is very good is that just like USB-C, there is no way for a user to tell from the look of the port or the file extension what the capabilities are. Which even for a fairly tech savvy user like me is frustrating. I have a bunch of cables, some purchased years ago, how do I know what is fit for what?

And now think of the younger generation that has grown up with smartphones and has been trained to not even know what a file is. I remember a story about senior high school students failing their school tests during covid because the school software didn't support HEIF files and they were changing the file extension to jpg to attempt to convert them.

I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET Framework are fossilised in the world of multimedia as of 2008-ish. I don't believe HEIF is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer PNG format.

bayindirh

JPEG is no different. Only the decoder is specified. As long as what you give the decoder decodes to the image you wanted to see, you can implement anything. This is how imgoptim/squash/aerate/dietJPG work: by (ab)using this flexibility.

The same is also true for the most advanced codecs. The MPEG-* family and MP3 come to mind.

Nothing stops PNG from defining a "set of decoders" and letting implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.

fc417fc802

I honestly don't see an issue with the mpeg-4 example.

Regarding the potential for fragmentation of the png ecosystem the alternative is a new file format which has all the same support issues. Every time you author something you make a choice between legacy support and using new features.

From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.

7bit

I never once in 25 years encountered an issue with an MP4 container that could not be solved by installing either the DivX or Xvid codec. And I extensively used MP4's metadata for music, even with esoteric tags.

Not sure what you're talking about.

mort96

> Adding another compression form should be handled by existing software as recognizing it as a valid PNG that they can't decompress.

Yeah, we know. That's terrible.

shiomiru

The difference between valid PNG you can't decompress and invalid PNG is fairly irrelevant when your aim is to get an image onto the screen.

And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.

Arnt

It's a new and compatible spin. https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the important new feature and your old software can display it.

More generally, PNG has a simple feature to specify what's needed. A file consists of a number of chunks, and one bit in the chunk specifies whether that chunk is required for display. All of the extensions I've seen in the past decades set that bit to "optional".

For example, this update includes a chunk containing EXIF data. As you'd expect, the exif chunk sets that bit to "optional".

fc417fc802

> plenty of more advanced competing lossless formats

Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?

pvorb

Extending the format just because you can – and breaking backwards compatibility along the way – is even more pointless.

If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.

jajko

What about an extensible format that would have, as part of its header, an algorithm (in some recognized DSL) for how to decompress it (or any other step required for image manipulation)? I know it's not so much about PNG but some future format.

That's what I would call really extensible, but then there may be no limits, and hackers/viruses could easily have a field day.

HelloNurse

Extensibility of PNG has been amply used, as intended, for proprietary chunks that hold application specific data (e.g. PICO-8 games) without bothering other software.

Lerc

Doesn't pico-8 store the data in the least significant bits of colour? Maybe it got updated to use chunks.

chithanh

> Adding another compression form should be handled by existing software

In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.

TLS has learned this the hard way when they discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.

dooglius

> Having an extensible format and electing never to extend it seems pointless.

So then it was pointless for PNG to be extensible? Not sure what your argument is.

jillesvangurp

Old PNGs will work just fine. And forward compatibility is much less important.

The main use case for PNG is web browsers and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of internet explorer. But some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard since most people have no clue about color spaces.

Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.

hnlmorg

Browsers aren't the only software that work with PNGs. Far from it in fact.

whywhywhywhy

> The main use case for PNG is web browsers

It's not; most images you encounter on the web need better compression.

The main PNG use case is storing lossless images locally as master copies that are then compressed, or workflows where you intend to edit and change images, where lossy formats would degrade them with every edit.

AlienRobot

>The main use case for PNG is web browsers

This is news to me. I'm pretty sure the main use case for PNG is lossless transparent graphics.

asadotzler

Depends on whose use cases you're considering.

There are about 3.6 billion people surfing the web and experiencing PNGs. That use case, consuming PNGs, seems to dwarf the perhaps 100 million (somewhat wild guess) graphic designers, web developers, and photo editing professionals who manipulate images for publishing (in any medium) or archiving.

If, on the other hand, you're considering the use cases envisioned by PNG's creators, or the use cases that interest the people processing or publishing images, yes, these people are focused on format itself and its capabilities.

I suspect this particular use of "use case" isn't terribly clear. Also these two considerations are not incompatible.

ProgramMax

Worry not! (Well, worry a little.)

The first bit of our research is "What can we already make use of which requires no spec update? There are plenty of PNG optimizers. How much of that should go into the typical PNG libraries?"

Same with parallel encoding & decoding. An older image viewer will be able to decode it on one thread without ever knowing parallel decoding was an option.

Here's the worry-a-little part: Everybody immediately jumps to file size as to what image compression is better or worse. That isn't the best take, but it is what it is. So there is pressure to adopt newer technologies.

We often do have a way to maintain some degree of backwards compatibility even when we do this. For example, we can store a downsampled image for old viewers. Then extra, new chunks will know "Mix that with this full scale data, using a different compression".

As you can imagine, this mixing complicates things. It might not be the best option. Sooooo we're researching it :)

skywal_l

Can't you improve a compression algorithm and still produce valid decompressor input? PNG is based on zip; there are certainly ways to improve zip without breaking backwards compatibility.

That being said, they can also do dumb things. However, right at the end of the sentence you quote, they say:

> we want to make sure we do it right.

So there's hope.

masklinn

> Can't you improve a compression algorithm and still produce a still valid decompression input? PNG is based on zip, there's certainly ways to improve zip without breaking backwards compatibility.

That's just changing an implementation detail of the encoder, and you don't need spec changes for that, e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE stream (at a non-insignificant cost). This is transparent to the client, as the output is still just a DEFLATE stream.
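
A toy illustration of that encoder freedom (standard zlib; the data is invented):

    import zlib

    data = bytes(1000) + b"example" * 500

    fast = zlib.compress(data, level=1)  # quick, larger output
    best = zlib.compress(data, level=9)  # slower, smaller output

    # Different encoders and effort levels give different sizes, but any
    # spec-compliant inflater recovers identical bytes.
    assert zlib.decompress(fast) == zlib.decompress(best) == data
    print(len(fast), len(best))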

vhcr

That's what OptiPNG already does.

josefx

Doesn't OptiPNG just brute force various settings and pick the best result?

colanderman

One could imagine a PNG file which contains a low-resolution version of the image with a traditional compression algorithm, and encodes additional higher-resolution detail using a new compression algorithm.

mrheosuper

Does the USB-C spec break backward compatibility? A 2018 MacBook works perfectly fine with a 2025 USB-C charger.

danielheath

Some things don't work unless you use the right kind of USB-C cable.

E.g. your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.

USB 3 didn't have this issue - every cable worked with every port.

mrheosuper

That is not a backward compatibility problem. If a cable did 100W charging when using PD 2.0 but only 60W when used with a PD 3.1 device, then I would agree with you.

mystifyingpoi

Yeah, I also don't think they've broken backwards compat ever. Super high end charger from 2024 can charge old equipment from 2014 just fine with regular 5V.

What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.

techpression

I don’t know if it’s the spec or just a plethora of vendors that ignore it, but I have many things with a USB-C port that require USB-A as the source. USB-C to A to C works, yay dongles, but not just C to C. So maybe it’s not really breaking backwards compatibility, just a weird mix of a port and the communication being separate standards.

mrheosuper

Because those USB-C ports do not follow the spec. If they had followed the spec from day one, there would be no problem even now.

fragmede

It's vendors just changing the physical port but not updating the electronics. Specifically, 5.1 kΩ pull-down resistors on the CC1 and CC2 pins are needed on the device side in order for a C-to-C cable to work.

zirgs

Yeah - it's a mess. Some devices only charge with a charger that supports PD. Some other devices need a charger WITHOUT PD support.

altairprime

They could, for example, use lossy compression for the compatibility layer and then fill it in the rest of the way to lossless using incompatible new compression objects. Legacy uses will see some fidelity degradation, but they are already being stuck with sRGB downmixes, so that’s fine — and those who are bothered by it can just emit a lossless-pixels (but lossy-color and lossy-range) compatibility layer and reserve the compression benefits for the color and dynamic range.

I’m not saying this is what will happen — but if I was able to construct a plausible approach to compression in ten minutes, then perhaps it’s a bit early to predict the doom of compatibility.