FLAC 1.5 Delivers Multi-Threaded Encoding
53 comments
February 11, 2025
jprjr_
The thing I'm excited about is decoding chained Ogg FLAC files.
Some software wouldn't work correctly with FLAC-based Icecast streams if it used libFLAC/libFLAC++ for demuxing and decoding. Usually these streams mux into Ogg and send updated metadata by closing out the previous Ogg bitstream and starting a new one. If you were using libFLAC to demux and decode, then when the stream updated, it would just hang forever. Apps had to do their own Ogg demuxing and reset the decoder between streams.
Chained Ogg FLAC allows having lossless internet radio streams with rich, in-band metadata instead of relying on out-of-band methods. So you could have in-band album art, artist info, links - anything you can cram into a Vorbis comment block.
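As a rough sketch of the demuxing workaround described above: a chained Ogg stream is just one logical stream after another, and each new chain link starts with a page whose beginning-of-stream (BOS) flag is set. The helper names below are hypothetical; the page layout follows the Ogg framing spec (RFC 3533), and a player would reset its FLAC decoder at each returned offset.

```python
import struct

def ogg_pages(data: bytes):
    """Yield (offset, header_type, serial) for each Ogg page found in `data`.

    Ogg page header layout (RFC 3533): 'OggS' capture pattern (4 bytes),
    version (1), header_type flags (1), granule position (8), serial (4),
    page sequence (4), CRC (4), segment count (1), then the segment table.
    """
    off = 0
    while off + 27 <= len(data):
        if data[off:off + 4] != b"OggS":
            off += 1  # resync: scan forward for the capture pattern
            continue
        header_type = data[off + 5]
        serial = struct.unpack_from("<I", data, off + 14)[0]
        nsegs = data[off + 26]
        body_len = sum(data[off + 27:off + 27 + nsegs])
        yield off, header_type, serial
        off += 27 + nsegs + body_len

def chain_boundaries(data: bytes):
    """Offsets where a new logical stream begins (a BOS page after the first),
    i.e. the points where a chain-aware player must reset its FLAC decoder."""
    boundaries = []
    first = True
    for off, htype, _serial in ogg_pages(data):
        if htype & 0x02:  # header_type bit 0x02 = beginning-of-stream
            if not first:
                boundaries.append(off)
            first = False
    return boundaries
```

With FLAC 1.5 handling chained streams natively, this kind of external bookkeeping is exactly what applications no longer need to carry themselves.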
masklinn
That’s nice although probably not of much use to most people: iirc FLAC encoding was 60+x realtime on modern machines already so unless you need to transcode your entire library (which you could do in parallel anyway) odds are you spend more time on setting up the encoding than actually running it.
Lockal
It could be useful for audio editors, like here: https://manual.audacityteam.org/man/undo_redo_and_history.ht... - many steps require a full save of the tracks (potentially dozens of them). It is possible to compress the history retrospectively, but why, when it can be done in parallel?
CyberDildonics
If you have multiple tracks you would just put different tracks on different threads anyway and parallelization is trivial.
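The album-level parallelism being described is a few lines of code: one single-threaded encode per worker. A minimal sketch, assuming the command-line `flac` tool is on PATH (the function names and file paths here are illustrative, not any particular tool's API):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def encode_track(wav_path: str) -> str:
    """Encode one WAV file with the `flac` CLI; returns the output path.
    The threads spend their time blocked on the subprocess, so a thread
    pool is enough to keep all cores busy."""
    subprocess.run(["flac", "--best", "--silent", wav_path], check=True)
    return wav_path.rsplit(".", 1)[0] + ".flac"

def encode_album(wav_paths, encode_one=encode_track, workers=4):
    """Encode many tracks concurrently: each encode stays single-threaded,
    but the batch saturates `workers` cores."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_one, wav_paths))
```

This is the trivial case; the new `--threads` style multi-threading in FLAC 1.5 matters precisely when there is only one long file to encode.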
diggan
> That’s nice although probably not of much use to most people
Doesn't that depend on the hardware of "most people"? Even if you have a shit CPU, you probably have more than one core, so this will be at least a little bit helpful for those people, wouldn't it?
Edit: Just tried turning a ~5 minute .wav into .flac (without multithreading) on an Intel Celeron N4505 (worst CPU I have running atm) and it took around 20 seconds, FWIW
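Rough arithmetic on those figures, for scale (the numbers come straight from the comment above):

```python
audio_seconds = 5 * 60      # ~5 minute track
encode_seconds = 20         # measured on the Celeron N4505

# encoder throughput relative to playback speed
speed = audio_seconds / encode_seconds  # 15x realtime, single-threaded
```

So even a weak CPU is well above realtime single-threaded; multi-threading mostly shaves seconds off already-short jobs there.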
_flux
But an even smaller number of people have individual, very long raw audio files.
I've converted a bunch of sound packs (for music production) to flac and it really takes next to no time at all. I suppose those are quite short audio files, but there's a lot of them, 20 gigabytes in total (in flac).
Perhaps the person who wrote this improvement did have a use case for it, though :).
johncolanduoni
In most situations you’d be encoding more than one song at a time, which would already parallelize enough unless you had a monster cpu and only one album.
diggan
I dunno, when I export a song I'm working on, it's just that one song. I think there are more use cases for .flac than just converting a big existing music collection.
stonemetal12
FLAC is more than 20 years old at this point.
At least according to Wikipedia, it doesn't look like they've changed the algorithm much in the meantime, so just about anything should be able to run it today.
masklinn
> this will be at least little bit helpful for those people, wouldn't it
Probably not, because they're unlikely to have enough stuff to export that it's relevant.
> Edit: Just tried turning a ~5 minute .wav into .flac (without multithreading) on a Intel Celeron N4505 (worst CPU I have running atm) and took around 20 seconds, FWIW
Which is basically nothing. It takes more time than that to fix the track's tagging.
diggan
I mean, you're again assuming the only use case is "encode and tag a huge music collection", encoding is used for more than that.
For example, I have a raspberry pi that is responsible for recording as soon as it powers up. Then I can press a button, and it grabs the last 60 recorded minutes, which happen to be saved as .wav right now. I'm fine with that; the rest of my pipeline works with .wav.
But if my pipeline required flac, I would need to turn the wav into flac on the raspberry pi at this point, for 60 minutes of audio, and of course I'd like that to be faster if possible, so I can start editing it right away.
dijital
For folks working in bioacoustics I think it might be pretty relevant. I'm working on a project with large batches of high fidelity, ultrasonic bioacoustic recordings that need to be in WAV format for species analysis but, at the data sizes involved, FLAC is a good archive format (~60% smaller).
This release will probably be worth a look to speed the archiving/restoring jobs up.
dale_glass
I have a possible use for FLAC for realtime audio.
We (Overte, an open source VR environment) have a need for fast audio encoding, and sometimes audio encoding CPU time is the main bottleneck. For this purpose we support multiple codecs, and FLAC is actually of interest because it turns out that the niche of "compressing audio really fast but still in good quality" is a rare one.
We mainly use Opus, which is great, but it's fairly CPU-heavy, so there can be cases where one might want to sacrifice some bandwidth in exchange for less CPU time.
flounder3
Was about to say the same thing. It was already blazingly fast, with a typical album only taking seconds.
2OEH8eoCRo0
Even still, it was no issue saturating all CPU cores since each core could transcode a track at a time.
jiehong
Interestingly, FLAC is now published as RFC 9639 [0].
macawfish
Will this translate to low latency FLAC streaming?
jprjr_
For FLAC, latency is primarily driven by block size and the library's own internal buffering.
Using the API you can set the blocksize, but there's no manual flush function. The only way to flush output is to call the "finish" function, which as its name implies - marks the stream as finished.
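The block-size point above can be made concrete: a FLAC frame can only be emitted once a full block of samples has been collected, so the block duration is a lower bound on encoder latency regardless of CPU speed. A quick back-of-the-envelope helper (illustrative values, not from any particular library):

```python
def block_latency_ms(block_size: int, sample_rate: int) -> float:
    """Minimum buffering delay imposed by the block size alone:
    the encoder must wait for `block_size` samples before it can
    emit the frame containing them."""
    return 1000.0 * block_size / sample_rate

# typical default block vs. a small low-latency block
default_ms = block_latency_ms(4096, 44100)   # ~93 ms before any frame exists
small_ms = block_latency_ms(480, 48000)      # 10 ms, Opus-like frame duration
```

That ~93 ms floor (plus the library's own output buffering) is why a realtime-oriented encoder wants both small blocks and an explicit flush.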
I actually wrote my own FLAC encoder for this exact reason - https://github.com/jprjr/tflac - focused on latency over efficient compression.
ksec
People looking for low latency lossless streaming may want to take a look at WavPack.
vodou
Probably not. It only mentions multi-threaded encoding. Not decoding. But for streaming it shouldn't matter a lot since you only decode smaller chunks at a time. Latency should be good. At least that is my experience and 95% of my music listening is listening to FLAC files.
shawabawa3
probably not as FLAC is basically only useful for archival purposes
for streaming you are better off with an optimised lossy codec
iamacyborg
I can understand why a big streaming provider might want to use a lossy codec from a bandwidth cost perspective but what about in the context of streaming in your own network (eg through Roon or similar)?
masklinn
Why would you transcode to FLAC when streaming? And transcode from what?
theandrewbailey
> for streaming you are better off with an optimised lossy codec
If you are Spotify, that probably makes sense. But if you are someone with a homelab, you probably have enough bandwidth and then some, so streaming FLAC to a home theater (your own or your friend's) makes sense.
PaulDavisThe1st
It is what (originally) SlimDevices (now Logitech Media Server) does, for example.
Night_Thastus
Audio files are tiny, itty bitty things - even uncompressed. If you have the ability to use a lossless file at 0 extra cost... why not? Massive streaming services like Spotify obviously don't; the economics are way different.
she46BiOmUerPVj
I have a flac collection that I was streaming, and I ended up writing some software to encode the entire library to opus because when you are driving around you never know how good your bandwidth will be. Since moving to opus I never have my music cut off anymore. Even with the nice stereo in my car I don't notice any quality problems. There are definitely reasons to not stream wav or flac around all the time.
pimeys
Tell that to some of my 24bit/192kHz flac files. About 300 megabytes each. Not nice to stream with plexamp using my 40 Mbps upstream... Easy to encode in opus though.
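The bitrate arithmetic behind that complaint checks out. Uncompressed stereo PCM at 24-bit/192 kHz runs over 9 Mbit/s; the ~40% FLAC savings used below is an assumption for hi-res material, not a measured figure:

```python
def pcm_bitrate_mbps(sample_rate: int, bits: int, channels: int = 2) -> float:
    """Raw PCM bitrate in Mbit/s."""
    return sample_rate * bits * channels / 1e6

raw = pcm_bitrate_mbps(192_000, 24)  # 9.216 Mbit/s uncompressed
flac_estimate = raw * 0.6            # assuming ~40% FLAC compression savings
```

So even compressed, a single such stream eats a meaningful slice of a 40 Mbps uplink once it's shared with anything else, while Opus at a couple hundred kbit/s is negligible.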
epcoa
The major streaming platforms except Spotify have offered lossless streaming as an upgrade or benefit (Apple Music) for years, and even Spotify, the holdout, is releasing "Super Premium" soon. Opinion aside, lossless streaming is a big deal.
timcobb
why is that?
shawabawa3
the human ear just isn't good enough at processing sound to need lossless codecs
a modern audio codec at 320kbps bitrate is more than good enough.
Lossless is useful for recompressing stuff when new codecs come out or for different purposes without introducing artefacts, not really for listening (in before angry audiophiles attack me)
TacticalCoder
[dead]
On Windows (so libwinpthread), 8C/16T machine: