
VLC tops 6B downloads, previews AI-generated subtitles

Semaphor

A while ago I used Whisper (or rather an OSS subtitle tool that used Whisper; sadly I can’t remember the name. It also converted burned-in subs to proper ones via OCR) to generate subtitles for a season of a show (a 4-season DVD set: one season had proper subs, two had burned-in subs, one had none). The show was too old and not popular enough to have "scene" subs, and the tool worked impressively well. The most memorable thing for me was that a character’s scream was properly attributed to the correct character.

I’d love a feature like that for Jellyfin eventually.

stzsch

You can set up a Bazarr instance to use Whisper as a subtitle provider.

Semaphor

Looks like it only works if you use the *arr stack, which I don’t.

diggan

If you're already using Jellyfin, then why not? Don't want to complicate the stack?

jl6

There's an art to subtitling that goes beyond mere speech-to-text processing. Sometimes it's better to paraphrase dialog to reduce the amount of text that needs to be read. Sometimes you need to name a voice as unknown, to avoid spoilers. Sometimes the positioning on the screen matters. I hope the model can be made to understand all this.

diggan

> Sometimes it's better to paraphrase dialog to reduce the amount of text that needs to be read

Please no. Some subtitle companies do think like this, and it's really weird, like when they try to "convert" cultural jokes and then layer on a bunch of assumptions about which cultures you're aware of depending on the subtitle language, making it even harder to understand...

Just because I want my subtitles in English doesn't mean I want typical Spanish food names to be replaced by "kind of the same" British names, yet that's exactly something I've come across before. Horrible.

flippyhead

I totally get this. When I'm watching videos for the purpose of learning a language, I want all the actual words in the subtitles. But if I'm watching just to enjoy, say in a language I don't care to learn, I don't mind someone creatively changing the dialog to how it probably would have been written in English. This happens with translations of novels all the time. People even seek out specific translators who they feel are especially talented at this kind of thing.

bdndndndbve

It depends on the context! Trying to Americanize Godzilla, for instance, has largely failed because Godzilla is an allegory for the unique horror of nuclear bombing which Japan experienced. Making him just a lizard that walks through New York is kind of stupid.

Jokes are an example of something translators can do really well - things like puns don't work 1:1 across languages. A good translator will find a corresponding, appropriate line of dialogue and basically keep the intent without literally translating the words.

Food is kind of silly because it's tied to place - if a setting is clearly Spanish, or a character is Spanish, why wouldn't they talk about Spanish food? Their nationality ostensibly informs something about their character (like Godzilla) and can't just be find/replaced.

diggan

> Jokes are an example of something translators can do really well - things like puns don't work 1:1 across languages. A good translator will find a corresponding, appropriate line of dialogue and basically keep the intent without literally translating the words.

Again, those aren't "cultural translations" but "idiom translations", which I do agree should be translated into something understandable in the target language, otherwise you wouldn't understand them.

What I was aiming at in my original comment was examples like these:

> Family Guy original voice-overs + subtitles making a joke about some typical father figure in Hollywood for example. Then the Latin Spanish subtitles will have translated that joke but replaced the specific actor with some typical father figure from the Mexican movie industry, while the Castilian subtitles would have replaced it with someone from the Spanish movie industry.

lifthrasiir

More precisely speaking, there are two kinds of subtly different subtitles with different audiences: those with auditory impairments and those with less understanding of the given language. The former will benefit from paraphrasing, while the latter will be actively disadvantaged by the mismatch.

xattt

There was an eminent Russian voiceover artist (Goblin?) who translated pirated Western movies with his own interpretation.

His translations were nowhere near what the movie was about, but they were hilarious and fit the plot perfectly.

naoru

Not all of his translations were nowhere near the original. For example, his translation of Guy Ritchie's Snatch was excellent (in my opinion of course) and is still quoted to this day. I'd say it's the only one that absolutely nails it and then some.

On the other hand, his Lord of The Rings was an "alternative" dub as you described. Didn't watch that one though.

scarface_74

I know a little Spanish and even I get annoyed when the English subtitles don’t match what they said in Spanish. Of course I expect grammatically correct Spanish to be translated into grammatically correct English.

close04

> Spanish food names to be replaced by "kind of the same" British names

The purpose of a translation is after all to convey the meaning of what was said. So for example you'd want the English "so so" to be translated in Spanish as "más o menos" instead of repeating the translation of "so" twice. You don't want to just translate word for word, venir infierno o alta agua.

A lot of dialog needs language specific context, many expressions don't lend themselves to literal translation, or the translation in that language is long and cumbersome so paraphrasing is an improvement.

Like with anything else, the secret is using it sparingly, only when it adds value.

diggan

> But for example you'd want the English "so so" to be translated in Spanish as "más o menos" instead of doubling down on whatever literal translation for "so" they choose.

Agree, but I don't think those are "cultural translations" but more like "idioms translations", which mostly makes sense to do.

What I originally wrote about are things like Family Guy original voice-overs + subtitles making a joke about some typical father figure in Hollywood for example. Then the Latin Spanish subtitles will have translated that joke but replaced the specific actor with some typical father figure from the Mexican movie industry, while the Castilian subtitles would have replaced it with someone from the Spanish movie industry.

techjamie

That's a good example of translation where there's only really so many ways to do it. A bad example like people are talking about is the original 4Kids Pokemon where every time someone brought out an Onigiri (rice ball), they would call them jelly donuts.

llm_nerd

There is the art of subtitling, and then there is the technical reality that sometimes you have some content with no subtitles and just want a solution now, but the content didn't come with an SRT or better yet VTT and OpenSubtitles has no match.

They're using Whisper for speech-to-text, and some other small model for basic translation where necessary. It will not do speaker identification (diarization), and certainly isn't going to probe into narrative plot points to figure out whether naming a character is a reveal. It isn't going to place text on the screen according to the speaker's position in the frame, nor for least intrusion. It's just going to have a fixed area where a best effort at speech-to-text is performed, as a last resort where the alternative is nothing.

Obviously it would be preferable to have carefully crafted subtitles from the content creator, translating if the desired language isn't available but still using all the cues and positions. Second best is carefully crafted community subtitles from OpenSubtitles or the like, maybe where someone used "AI" and then hand-positioned/corrected/updated them. Failing all that, you fall back to this.
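The fallback pipeline described above (speech-to-text into a plain subtitle file) is mostly just timestamp formatting once you have timed transcript segments. As a minimal sketch, assuming you already have a list of (start, end, text) segments roughly shaped like Whisper's output (the segment data here is hypothetical):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """Render (start, end, text) segments as numbered SRT cues."""
    cues = []
    for i, (start, end, text) in enumerate(segments, start=1):
        cues.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text.strip()}\n"
        )
    return "\n".join(cues)

# Hypothetical segments, shaped like a speech-to-text model's output:
segments = [(0.0, 2.5, "Hello there."), (2.5, 5.0, "General Kenobi!")]
print(to_srt(segments))
```

The output can be saved as a `.srt` file next to the video, which is exactly the "cache" format later comments suggest sharing instead of regenerating.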

eviks

> better to paraphrase dialog to reduce the amount of text that needs to be read.

That's just bad, destructive art, especially for a foreign language that you partially know.

> Sometimes you need to name a voice as unknown, to avoid spoilers.

Don't name any, that's what your own eye-ear voice recognition/matching and positioning are for (also reduces the amount of text)

> Sometimes the positioning on the screen matters.

This is rather valuable art indeed! Though it's unlikely to be modelled well.

lxgr

> Don't name any, that's what your own eye-ear voice recognition/matching and positioning are for

That’s tricky when one or more speakers aren’t visible.

eviks

They'd also have to start speaking at the same time and have similar voices to make it tricky

MindSpunk

Subtitles aren't just for foreign viewers though, they're also for native speakers who are now hearing impaired.

eviks

Sure, that's what the HI version is for.

entropie

> Sometimes it's better to paraphrase dialog to reduce the amount of text that needs to be read.

Questionable. It drives me crazy to have subtitles that are paraphrased in a way that changes the meaning of statements.

Aurornis

AI subtitles are just a text representation of the soundtrack.

There is no need for artistic interpretation, substituting words, or hiding information. If it’s in the audio, there’s no reason to keep it out of the subtitle.

An AI subtitle generator that takes artistic license with the conversion is not what anyone wants.

Hard_Space

That doesn't work for idioms, certainly in Italian, which has many colorful metaphors that would be mystifying if translated directly.

saint_yossarian

I don't think anybody's talking about translations.

raincole

> Sometimes it's better to paraphrase dialog to reduce the amount of text that needs to be read

I really hope people stop doing that.

mohamez

This is horrible for people who learn languages using TV shows and movies. One of the most frustrating things I've encountered while learning German is the "paraphrase" thing; it makes practicing listening very hard, because my purpose wasn't to understand what was being said, but rather to familiarize my ear with spoken German.

So knowing exactly the words being said is of the utmost importance.

thiht

> Sometimes it's better to paraphrase dialog to reduce the amount of text that needs to be read

NO!

I speak and understand 90% of English but I still use subtitles because sometimes I don't understand a word, or the sound sucks, or the actor thought speaking in a very low voice was a good idea. When the subtitles don't match what's being said, it's a terrible experience.

kace91

I recently used some subtitles that I later found out had been AI generated.

The experience wasn't really good to be honest - the text was technically correct but the way multiline phrases were split made them somehow extremely hard to parse. The only reason I learned AI was involved is that it was bad enough for me to stop viewing and check what was wrong.

Hopefully it's an implementation detail and we'll see better options in the future, as finding subtitles for foreign shows is always a pain.

soco

This reminds me of Prime Video subtitles. Anything not Hollywood blockbuster will only have one language (from what it looks like, randomly chosen) of garbage quality (not sure whether AI generated though). But there's worse anyway - some Asian titles ONLY available in badly dubbed versions - again in some random language (hello Casshern in... German???). So I see this VLC initiative as an improvement from this very very low bar.

adinisom

On Youtube I really like the auto-generated captions and often prefer them to the creator's because:

- Sometimes the creator bases their captions on the script and misses changes in edit

- Sometimes the creator's captions are perfect transcriptions but broken up and timed awkwardly

Auto-generated captions aren't always perfect, but unlike human captions they provide word-by-word timing.

PoignardAzur

Yeah, I definitely wish other media would experiment with Youtube-style word-at-a-time subtitles. They often feel a lot more natural than full-sentence subtitles, the way they stream in is better at providing "connecting tissue", they never spoil upcoming reveals the way subtitles tend to, etc.

(By "connecting tissue", I mean they don't have the problem where sentence A is "I like chocolate", sentence B is "only on special occasions", and at the time B appears A is completely gone, but you really need A and B to be onscreen at the same time to parse the full meaning intuitively.)

Dr4kn

Proper subtitles are obviously better, but it's impossible to do on everything. The tech is going to get better, and is already a game changer for hearing impaired people. Subtitles that are mostly correct are much better than none at all.

VLC has the option to find subtitles. If you use Plex or Jellyfin, there are add-ons, or Bazarr, which does it automatically.

shinycode

I had exactly the same experience. Human expertise in subtitling makes the experience much better.

esperent

If you have the luxury of requiring subtitles in English, sure. There's a huge scene of people making them and high quality subtitles available for pretty much everything. If you need subs in another language though your experience might change dramatically. Especially for any media that is old or less popular, in which case your options are probably either really bad subs, out of sync subs, or most likely, none whatsoever.

xdennis

As a Romanian, I'm so sick of AI translations on YouTube, especially since they use Google's translation (OpenAI's at least works quite well). Here's an example (translated back to English):

> Man Builds Background of Tan for Swedish House

It's completely puzzling. To understand it you have to know both English and Romanian. "Background of tan" is the term for "foundation" (makeup) in Romanian. That is, "foundation" has two meanings in English: for a house and for makeup, but Google uses the wrong one.

Automatic translation is full of these bad associations. I have no idea how people who don't speak English understand these translations.

Yaina

It's really sad that I'm reading "open source model" and think "hmhm, as if".

Maybe they're really using a truly open source model (probably not) but the meaning of the word is muddied already.

rvnx

https://code.videolan.org/videolan/vlc/-/merge_requests/5155

Here they are working on integrating Whisper.cpp

In the search bar it says "Updated 2 weeks ago", as if there were additional recent comments or actions in this thread that we cannot see.

So it could actually be the OpenAI Whisper model, for which we have the final binary format (the weights) but not the source training data; still, it is the best you can get for free.

_heimdall

The meaning of "AI" and "open source model" have both been muddied enough to be pretty meaningless.

tmtvl

Yeah, it'd be nice if we could all use 'open source' to mean 'open weights' + 'open training set', instead of just 'open weights'. I fear that ship has sailed though. Maybe call it a 'libre' model or something?

c16

Very excited for this, but it's a waste of energy if everyone needs to process their video in real time.

huijzer

Why are we still talking about this? Computers are INCREDIBLY efficient and still become orders of magnitude more efficient. Computation is really negligible in the grand scheme of things. In the 80s some people also said that our whole world energy would go to computations in the future. And look today. It’s less than 1%. We do orders of magnitude more computations, but the computers have become orders of magnitude more efficient too.

As another way to look at this, where does this questioning of energy use end? Should I turn off my laptop when I go to the supermarket? When I go to the toilet? Should I turn off my lights when I go to the toilet?

My point is, we do a lot of inefficient things and there is certainly something to being more efficient. But asking "is it efficient" immediately when something new is presented is completely backwards if you ask me. It focuses our attention on new things even though many old things are WAY more inefficient.

simgt

> In the 80s some people also said that our whole world energy would go to computations in the future. And look today.

Today we consume twice as much energy as we did in the 80s (and that's mostly coming from an increase in fossil fuels consumption). Datacenters alone consume more than 1% of global energy production, that doesn't include the network, the terminals, and the energy necessary to produce all of the hardware.

xnx

> Today we consume twice as much energy as we did in the 80s

Who's "we"?

Worldwide we've gone from 4.4 tons/person to 4.7 tons/person (7% increase) from 1980 to 2023: https://ourworldindata.org/grapher/co-emissions-per-capita?t...

huijzer

> Datacenters alone consume more than 1% of global energy production

Energy-use or electricity-use? Only about 20% of total energy use is electricity [1]. I went through the math for EUV machines in more detail at https://news.ycombinator.com/threads?id=huijzer#42600790.

[1]: https://ourworldindata.org/energy/country/united-states

magic_smoke_ee

> Computers are INCREDIBLY efficient and still become orders of magnitude more efficient.

That's what a software engineer would say who views resources as unlimited and free.

baal80spam

Do you not pay for the energy you use?

lxgr

> Why are we still talking about this? Computers are INCREDIBLY efficient and still become orders of magnitude more efficient.

Because today is today, and if we can project that the energy consumption of doing a task n times on the client side outweighs the complexity of doing it once and then distributing the result somehow to all n clients, we should arguably still do it.

Sometimes it's better to wait; sometimes it's better to ship the improved version now.

> Computation is really negligible in the grand scheme of things.

Tell that to my phone burning my hand and running through a quarter of its battery for some local ML task every once in a while.

Mashimo

> Should I turn off my laptop when I go to the supermarket?

Yes.

_flux

What is the alternative?

The results could be cached, but it's probably unlikely that they would need to be used again later, as I imagine most videos are watched only once.

Another option would be to upload the generated subtitles to some service or via P2P, but I believe that would also be a problem (e.g. privacy, who runs the service, firewalls for P2P, etc.).

rvnx

It can actually send to Google the information on what you are playing:

https://github.com/videolan/vlc/blob/f908ef4981c93a8b76805ad...

and to their own servers:

https://github.com/videolan/vlc/blob/f908ef4981c93a8b76805ad...

so it could fetch subtitles at the same time?

edit: cf. what "a3w" says too.

_flux

Downloading is still much easier to handle than uploading.

scarface_74

Tangentially related: funny enough, this site was just submitted yesterday to HN - https://exampl.page/

You can navigate to $foo.exampl.page and it will generate a website on the fly with text and graphics using AI. It will then save and cache the page.

It’s admittedly a useless but cool little demo.

c16

This is exactly the answer imo. We have subtitle files, which is in effect a cache. Process once, read many times.

So while I'm excited this feature is now available, having high quality subtitles cached in one place and generated by AI is the answer imo.

magic_smoke_ee

Authoritatively-correct subtitles rather than distributed generation and/or publication by anyone and everyone, including AI.

I don't know how many times I've seen subtitles that appear to be based on a script or were half-assed, and don't match the dialogue as spoken at all.

a3w

VLC has a "check my media online" feature, next to "check for updates to VLC on startup" already. Could they offer subtitle downloads?

Previously, that was used for mp3 album covers or something?

CaptainFever

OpenSubtitles as a cache, maybe? (With their collaboration, of course.)

_flux

It seems like some kind of review process would need to be included to reduce abuse possibilities.

But yes, that would be quite nice, if there are enough people who don't mind uploading the file names of the media they play (or some other unique media identifier?) along with the subtitles to that service, perhaps with user credentials. It would certainly need to be opt-in, which makes one wonder whether it would be very effective at all.

n144q

me: waiting years for VLC to fix basic usability issues and reconcile UI across different platforms

VLC: we're gonna work on AI

thiht

The blanket criticism of AI is ridiculous. AI-generated subtitles solve real issues for users.

n144q

How is criticism of VLC project management a criticism of AI?

Dude you need to level up your reasoning skills.

thiht

When you write "VLC: we're gonna work on AI" you’re clearly implying AI is worthless.

jillesvangurp

This boils down to software development not being free. In VLC's case, the development is funded by several for profit companies (like videolabs) that make their money from stuff they do with VLC (consulting, commercial services, etc.).

VLC is a good example of an OSS project that is pretty well run with decades of history that has a healthy ecosystem of people and companies earning their living supporting all that and a foundation to orchestrate the development. I don't think there ever was a lot of VC money they need to worry about. This is all organic growth and OSS working as it should.

So, this boils down to what paying customers of these companies are paying for. The project also accepts donations but those go to the foundation and not the companies. It's the companies that employ most of the developers. And you can't fault them on working on things that they value. If AI features is what they pay for then that is what they work on.

I happen to share your reservations about the UX. It's a bit old school, to put it mildly. And they obviously don't have professional designers that they work with. Like many OSS products, it looks and feels like a product made by techies for techies. It doesn't bother me that much, but I do notice these things. I actually talked to one of their iOS developers a few years ago. Pretty interesting person, and not a huge team as I recall. I remember talking to her about some of the frustrations she had with the UX and the lack of appreciation for it. I think she moved to Netflix afterwards.

Like with most OSS projects you are welcome to take part in the meritocracy and push your favorite features or pay somebody to do that for you. But otherwise, you should just be grateful for this awesome thing existing and prospering.

imiric

> Like many OSS products, it looks and feels like product made by techies for techies.

That's not the problem. mpv is another media player that is arguably even more "made by techies for techies", yet it doesn't have the usability issues of VLC, and is a much more robust piece of software.

VLC is just poorly designed from the ground up, and the project's priorities are all over the place, as this AI initiative demonstrates.

NicuCalcea

Those are just your priorities. I don't have any usability issues with VLC, but would use the AI subtitles.

ttoinou

They don’t need designers when they are the free media player that stood the test of time and being used by the masses. It’s true organic bottom up design, tweaked little by little over the years

jillesvangurp

It's not required. But it could make their product easier to use and more usable for their users. But that's clearly not something the core team values or is passionate about and I appreciate that they have other priorities.

It's common with many OSS projects. There are a few positive exceptions. But this stuff is hard.

petee

Don't they still consider "app locks up when playlist free-loops" not a bug?

hardwaresofton

VLC is excellent on iOS -- highly recommended!

Go-to player with easy wifi loading and ability to connect to file shares to find files. Simple and actually easy to use (of course having a file server is another question)

TheAceOfHearts

AI subtitle generation seems like a useful feature. Hopefully they'll integrate with a subtitle sharing service so we don't have computers repeatedly duplicating work.

In the ideal case you'd probably generate a first pass of subtitles with AI, have a human review and tweak it as needed, and then share that. There's no reason for people to be repeatedly generating their own subtitles in most cases.

Dr4kn

Android and iOS already support live captions, and AI accelerators are becoming more common in PC hardware. If you can generate it with little compute at home, then there is no need to set up a sharing system.

You also want local generation in a lot of cases. If you have your own videos, you need to generate subtitles yourself. For accessibility it's fantastic if every video can have subtitles.

If generating your own is fast, good enough, and takes little compute, then there's no need to share them. Having subtitles generated by the best models and optimized by humans is better, but not needed in most cases.

_heimdall

A system like that would be pretty nice as long as it wasn't a privacy problem. You wouldn't really need LLMs to do the subtitles at all then, though; for any video common enough to be shareable, the subtitles probably already exist from the original source.

newjersey

How do you do this in a privacy preserving way?

dillydogg

The only feasible way I can think of is a locally run model. Perhaps whisper?

andrepd

I think they meant the sharing

mavamaarten

Run the model on your PC. Done?

daemin

Isn't this what YouTube has been doing with automatically generated subtitles for videos for years?

I've always found their quality to be somewhat lacking.

wodenokoto

Yes and no.

Likely due to cost, Google hasn't decided to use an LLM (what basically everybody refers to when they say AI now) to generate the subtitles, and by modern standards they aren't very good.

nicoburns

Quality definitely can be lacking, on the other hand, it is significantly better than no subtitles at all.

eviks

Would be great to add a system of sharing the results instead of having to waste resources transcribing the same video over and over again with no chance of correction

tessellated

Interesting for comparison, tweaking, analysis, ...

netsharc

I wonder if VLC tracks their downloads per day or region. Similar to Pornhub Trends, it'd be interesting to correlate a spike of downloads with events like Netflix hiking up their prices/enforcing anti-password-sharing policies, dropping some series from their catalogue, or some hyped-up movie being released...

neilv

Will the ML model be fully open?

(Like open source software, so that, in theory, someone could see the source code, source data, and process for how it was trained, and reproduce that? And could they change how that model is built, not just tune atop it, and distribute that under the same open source license?)

Tiberium

It's highly likely that they'll be using Whisper; if so, it won't be fully open. But I could be wrong, of course.