Yt-dlp: Upcoming new requirements for YouTube downloads
122 comments · September 24, 2025
Andrews54757
ACCount37
If you ever wondered why the likes of Google and Cloudflare want to restrict the web to a few signed, integrity-checked browser implementations?
Now you know.
jasode
>If you ever wondered why the likes of Google and Cloudflare want to restrict the web
I disagree with the framing of "us vs them".
It's actually "us vs us". It's not just us plebeians vs the FAANG giants. Small-time independent publishers and creators also want to restrict the web, because they don't want their content "stolen". They want to interact with real humans instead of bots. The following are manifestations of the same fear:
- small-time websites adding Anubis proof-of-work
- owners of popular Discord channels turning on the setting for phone # verification as a requirement for joining
- web blogs wanting to put a "toll gate" (maybe utilize Cloudflare or other service) to somehow make OpenAI and others pay for the content
We're long past the days of colleagues and peers on ARPANET and NSFNET sharing info for free on university computers. Now everybody on the globe wants to try to make a dollar, and likewise, they feel dollars are being stolen from them.
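For context, the proof-of-work schemes mentioned above (Anubis-style) boil down to making each client burn CPU before the server responds, which is cheap for one human but expensive at bot scale. A minimal sketch of the idea (the challenge string and difficulty parameter here are illustrative, not Anubis's actual protocol):

```python
import hashlib

def solve_challenge(challenge: str, difficulty: int) -> int:
    """Find a nonce so that sha256(challenge + nonce) starts with
    `difficulty` hex zeros. Expected cost grows as 16**difficulty."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """The server side is cheap: a single hash check."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: solving takes many hash attempts, verifying takes one, so a scraper hitting thousands of pages pays thousands of times over.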
skydhash
> small-time websites adding Anubis proof-of-work
Those were already public. The issue is AI bots DDoS-ing the server. Not everyone has infinite bandwidth.
> owners of popular Discord channels turning on the setting for phone # verification as a requirement for joining
I still think that Discord is a weird channel for community stuff. There are a lot of different formats for communication, but people are defaulting to chat.
> web blogs wanting to put a "toll gate" (maybe utilize Cloudflare or other service) to somehow make OpenAI and others pay for the content
Paid content is good (Coursera, O'Reilly, Udemy, ...). But a lot of these services want to be free, powered by ads (for audience reach?).
---
The fact is, we have two main bad actors: AI companies hammering servers, and companies that want to centralize content (that they do not create) by adding gatekeeping extensions to standard protocols.
johnebgd
It’s like we are living in an affordability crisis and people are tired of 400 wealthy billionaires profiting from people’s largesse in the form of free data/tooling.
supriyo-biswas
At least for YouTube, viewbotting is very much a thing, which undermines trust in the platform. Even if we were to remove Google ads from the equation, there’s nothing preventing someone from crafting a channel with millions of bot-generated views and comments in order to win paid sponsor placements, etc.
The reasons are similar for Cloudflare, but their stances are a bit too DRMish for my tastes. I guess someone could draw the lines differently.
ACCount37
If any of this was done to combat viewbotting, then any disruption to token calculation would prevent views from being registered - not videos from being downloaded.
rwmj
I'm sure that's a problem for Youtube. What does it have to do with me rendering Youtube videos on my own computer in the way I want?
wzdd
Youtube has already accounted for this by using a separate endpoint to count watch stats. See the recent articles about view counts being down attributed to people using adblockers.
Even if they hadn't done that, you can generate millions of botted views using a legitimate browser and some automation, and the current update doesn't change that.
So I'd say Occam's razor applies and Youtube simply wants to be in control of how people view their videos so they can serve ads, show additional content nearby to keep them on the platform longer, track what parts of the video are most watched, and so on.
sporkxrocket
As a viewer, this is not even remotely my problem.
imiric
Like another comment mentioned: that's a problem for YouTube to solve.
They pay a lot of money to many smart people who can implement sophisticated bot detection systems, without impacting most legitimate human users. But when their business model depends on extracting value from their users' data, tracking their behavior and profiling them across their services so that they can better serve them ads, it goes against their bottom line for anyone to access their service via any other interface than their official ones.
This is what these changes are primarily about. Preventing abuse is just a side benefit they can use as an excuse.
ForHackernews
> which undermines trust in the platform
What? What does this even mean? Who "trusts" youtube? It's filled with disinformation, AI slop and nonsense.
codedokode
There could be valid reasons for fighting downloaders, for example:
- AI companies scraping YT without paying YT, let alone creators, for training data. Imagine how much data YT has.
- YT competitors in other countries scraping YT to copy videos, especially in countries where YT is blocked. Some such companies have a "move all my videos from YT" feature to encourage bloggers to migrate.
transcriptase
>AI companies
Like Google?
>scraping YT without paying YT let alone creators for training data
Like Google has been doing to the entire internet, including people’s movement, conversations, and habits… for decades?
toomuchtodo
- Enforce views of ads
(not debating the validity of this reason, but this is the entire reason Youtube exists, to sell and push ads)
Chris2048
Who says these are valid?
baxuz
Then they should allow a download API for paying customers.
supriyo-biswas
Why is this being downvoted? Are people really gonna shoot the messenger and fail to see why a company may be willing to protect its competitive position?
dylan604
> For the web it requires that you run a snippet of javascript code (the challenge) in the browser to prove that you are not a bot.
How does this prove you are not a bot? How does this code not work in headless Chromium if it's just client-side JS?
Andrews54757
Good question! Indeed you can run the challenge code using headless Chromium and it will function [1]. They are constantly updating the challenge however, and may add additional checks in the future. I suppose Google wants to make it more expensive overall to scrape Youtube to deter the most egregious bots.
piyuv
I’m a paying YouTube Premium subscriber. Last weekend, I wanted to download something so I could watch it on the train. The app got stuck at “waiting for download..” on my iPad. Same on iPhone. Restarting did not work. I gave up after an hour (30 mins hands-on trying stuff, 30 mins waiting for it to fix itself). Downloaded the video using yt-dlp, transferred it to my USB-C flash drive, and watched it from that.
Awaiting their “premium cannot be shared with people outside household” policy so I can finally cancel. Family members make good use of ad-free.
femtozer
I also pay for YouTube Premium, but I still use ReVanced on my smartphone just to disable auto-translation. It’s absolute madness that users can’t configure this in the official app.
piyuv
It’ll be fixed when some product manager can offer it as a promotion project
the_af
The auto-dub feature is madness. I noticed it first a couple of days ago, I'm crossing my fingers that few authors choose to enable it, and that YouTube makes it easy to disable as a default in settings (not currently possible, you have to do it as you watch, every time).
I'm in a Spanish speaking country, but I want to watch English videos in English.
Auto-generated subtitles for other languages are ok, but I want to listen to the original voices!
meindnoch
>Awaiting their “premium cannot be shared with people outside household” policy so I can finally cancel.
Then I have good news for you! https://lifehacker.com/tech/youtube-family-premium-crackdown
In fact, I've already gotten an email from them about this. My YT is still ad-free though, so not sure when it's going to kick in for real.
beala
I'm also a premium subscriber, and have struggled with the same issues on the iPad app. I try to keep some shows downloaded for my toddler, and the download feature never seems to work on the first try.
I finally got so fed up, I bought a Samsung Galaxy Tab A7 off eBay for $50 and flashed it with LineageOS. I can now load whatever media I want onto the 1 TB SD card I've installed in it. The 5-year-old hardware plays videos just fine with the VLC app. And, as a bonus, I discovered that NewPipe, an alternative YouTube client I installed through the F-Droid store, is actually much more reliable at downloading videos than the official client. I was planning on using yt-dlp to load up the SD card, but now I don't even need to do that.
ac29
> Awaiting their “premium cannot be shared with people outside household” policy so I can finally cancel
That's been a policy for a while; the sign-up page prominently says "Plan members must be in the same household".
No idea if it's enforced though.
beerandt
Canceled mine after ad-free stopped working on YouTube Kids of all things (on ShieldTV). Was probably a bug, but with practically no customer service options, no real solutions besides cancel.
I was also a holdover from being a paying Play Music subscriber, and this was shortly after the PITA music switchover to YouTube, so it was the last straw.
shantara
I’m another Premium user in the same position. I use uBlock Origin and Sponsorblock on desktop and SmartTube on my TV. I pay for Premium to be able to share ad-free experience with my less technical family members, and to use their native iOS apps. If they really tighten the rules on Premium family sharing, I’ll drop the subscription in an instant.
al_borland
I’m a Premium user and primarily watch on AppleTV. A little while ago they added a feature where if I press the button to skip ahead on the remote when a sponsor section starts, it skips over the whole thing. It skips over “commonly skipped” sections.
While it doesn’t totally remove it, it lets me choose if I want to watch or not, and gets me past it in a single button press. All using the native app. I was surprised the first time this happened. I assume the creators hate it.
masklinn
Even more hilariously, if you upload to YouTube then try to download from your creator dashboard thing (e.g. because you were live-streaming and didn’t think to save a local copy or it impacts your machine too much) you get some shitty 720p render while ytdlp will get you the best quality available to clients.
yolo_420
I am a premium subscriber so I can download via yt-dlp in peace without any errors or warnings.
We are not the same.
est
I really appreciate the engineering effort that went into this "JavaScript interpreter"
https://github.com/yt-dlp/yt-dlp/blob/2025.09.23/yt_dlp/jsin...
jollyllama
I wonder how long until it gets split off into its own project. For the time being, it could do with a lot more documentation. At least they've got some tests for it!
supriyo-biswas
Heh, now I wonder how much JavaScript it actually interprets and, given that it’s < 1000 lines, whether it could be used in an introductory course on compilers.
LordShredda
I'm on mobile, but this seems like an actual JS interpreter that only does objects and arithmetic. Impressive that it went that far.
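As a toy illustration of how far a few hundred lines can get you, here is a minimal recursive-descent evaluator for arithmetic expressions. This is nothing like yt-dlp's actual jsinterp (which handles a JS subset), just the general shape such an interpreter takes:

```python
import re

# One token per match: an integer literal or a single operator/paren.
TOKEN = re.compile(r"\s*(\d+|[()+\-*/])")

def evaluate(src: str) -> int:
    """Evaluate +, -, *, / (integer division) with standard precedence."""
    tokens = TOKEN.findall(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_tok():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def expr():  # expr := term (("+" | "-") term)*
        value = term()
        while peek() in ("+", "-"):
            op, rhs = next_tok(), term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term():  # term := factor (("*" | "/") factor)*
        value = factor()
        while peek() in ("*", "/"):
            op, rhs = next_tok(), factor()
            value = value * rhs if op == "*" else value // rhs
        return value

    def factor():  # factor := INT | "(" expr ")"
        tok = next_tok()
        if tok == "(":
            value = expr()
            next_tok()  # consume ")"
            return value
        return int(tok)

    return expr()
```

Each precedence level is one function; adding variables, assignment, and function calls is more of the same, which is roughly how the original Python-based interpreter grew.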
stevage
heh, that's pretty cool.
random29ah
It's almost funny, and also sad, how their player/page has changed over time, filling up with tons of JS that makes less powerful machines lag.
For a while now, I've been forced to change "watch?v=" to "/embed/" to watch something in 480p on a 4th-gen i3, where the same video, when downloaded, uses ~3% of the CPU.
Unfortunately, it doesn't always work anymore.
https://www.youtube.com/watch?v=xvFZjo5PgG0 https://www.youtube.com/embed/xvFZjo5PgG0
While they worsen the user experience, other sites optimize their players and don't seem to care about downloaders (pr0n sites, for example).
wraptile
The days of just getting data off the web are coming to an end, as everything now requires a full browser running thousands of lines of obfuscated JS. So instead of a website giving me 1 KB of JSON that could be cached, I start a full browser stack and transmit 10 megabytes through 100 requests, messing up your analytics and security profile, and everyone loses. Yay.
apetresc
The writing is on the wall for easy ripping. If there's any YT content you expect you'll want to preserve for a long time, I suggest spinning up https://www.tubearchivist.com/ or something similar and archiving it now while you still can.
wintermutestwin
I agree and feel that the time is now to archive all of the truly valuable cultural and educational content that YT acquired through monopolistic means.
This solution looks interesting, but I am technical enough to know that this looks like a PITA to set up and maintain. It also seems like it is focused on downloading everything from a subbed channel.
As it is now, with a folder of downloaded videos, I just need a local web server that can interpret the video names and create an organized page with links. Is there anything like this that is very lightweight, with a next-next-finish install?
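In the absence of a ready-made tool, a listing page like that is small enough to hand-roll. A minimal sketch (the filename handling is naive, and `build_index` is a hypothetical helper, not part of any existing project); the output can be served by any static file server:

```python
import html
from pathlib import Path

# Extensions treated as videos; extend as needed.
VIDEO_EXTS = {".mp4", ".webm", ".mkv"}

def build_index(folder: str) -> str:
    """Generate a bare-bones HTML list linking every video in `folder`."""
    items = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() in VIDEO_EXTS:
            # Use the filename (underscores to spaces) as the link text.
            name = html.escape(path.stem.replace("_", " "))
            href = html.escape(path.name)
            items.append(f'<li><a href="{href}">{name}</a></li>')
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```

Write the result to `index.html` next to the videos and run `python -m http.server` in that folder; the links then stream straight from disk.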
lyu07282
They have already had the proper DRM tech for YouTube movies for years; why didn't they already turn that on for all content?
Mindwipe
YouTube's delivery scale is enormous, and adding additional complexity if they don't have to is probably considered a no-no.
But if they decide they have to, they can do it fairly trivially.
AbuAssar
on why they chose Deno instead of node:
"Other JS runtimes (node/bun) could potentially be supported in the future, the issue is that they do not provide the same security features and sandboxing that deno has. You would be running untrusted code on your machine with full system access. At this point, support for other JS runtimes is still TBD, but we are looking in to it."
adzm
I was surprised they went with Deno instead of Node, but since Deno has a readily available single-exe distribution that removes a lot of potential pain. This was pretty much just a matter of time, though; the original interpreter in Python was a brilliant hack but limited in capability. It was discussed a few years ago for the YouTube-dl project here https://news.ycombinator.com/item?id=32793061
nicce
Node does not have the concept of security and isolation that Deno has. There is a maintainer comment in the same thread.
arbll
The sandboxing features of Deno also seem to have played a role in that choice. I wouldn't overly trust that as a security layer but it's better than nothing.
CuriouslyC
Deno sandboxing is paper thin, last time I looked they had very simple rules. It's a checkbox feature. If you want isolation use WASM.
ndjddirbrbrbfi
It doesn’t have granularity in terms of which parts of the code have which permissions (everything in the same process has the same permissions), but aside from that I’m not sure what you mean about it being paper thin. Certainly WASM is a great option, and I think it can facilitate a more nuanced capability model, but for cases like this, AFAIK Deno should be secure (to the extent that V8 is secure, which Chrome’s security depends on).
It being a checkbox feature is a weird way to frame it too, because that typically implies you’re just adding a feature to match your competitors, but their main competitors don’t have that feature.
In what ways does it fall short? If there are major gaps, I’d like to know because I’ve been relying on it (for personal projects only myself, but I’ve recommended it to others for commercial projects).
m_ke
I used to work on video generation models and was shocked at how hard it was to find any videos online that were not hosted on YouTube, and YouTube has made it impossibly hard to download more than a few videos at a time.
aihell
Congrats, you're the problem.
fibers
You have to feed it multiple arguments with rate limiting and long wait times. I'm not sure if there have been recent updates other than the JS interpreter, but I've had to spin up a Docker instance of a browser to feed it session cookies as well.
m_ke
Yeah we had to roll through a bunch of proxy servers on top of all the other tricks you mentioned to reliably download at a decent pace
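For reference, yt-dlp already exposes flags for this kind of throttling and session reuse; an invocation along these lines (the values and proxy address are illustrative):

```shell
# Cap bandwidth, randomize waits between videos and between metadata
# requests, route traffic through a local proxy, and reuse a
# logged-in browser session's cookies.
yt-dlp \
  --limit-rate 2M \
  --sleep-interval 5 --max-sleep-interval 30 \
  --sleep-requests 1 \
  --proxy socks5://127.0.0.1:1080 \
  --cookies-from-browser firefox \
  "https://www.youtube.com/watch?v=xvFZjo5PgG0"
```

Rotating through multiple proxies, as described above, still requires wrapping this in an external loop or script; yt-dlp itself takes a single `--proxy` value per run.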
nikcub
Just the other day there was a story posted on hn[0][1] that said YouTube secretly wants downloaders to work.
It's always been very apparent that YouTube is doing _just enough_ to stop downloads while also supporting a global audience of 3 billion users.
If the world all had modern iPhones or Android devices, you can bet they'd straight up DRM all content.
sphars
This will be interesting to see how it affects the numerous Android apps on F-Droid that are essentially wrappers around yt-dlp to create a YouTube Music clone.
progbits
Can anyone explain specifically what the YT code does such that the existing Python interpreter is unusable and QuickJS apparently takes 20 minutes to run it?
Is it just a lot of CPU-bound code, and the modern JIT runtimes are simply that much faster, or is it doing some trickery that Deno optimizes well?
progbits
From https://github.com/ytdl-org/youtube-dl/issues/33186
> Currently, a new style of player JS is beginning to be sent where the challenge code is no longer modular but is hooked into other code throughout the player JS.
So it's no longer a standalone script that can be interpreted but it depends on all the other code on the site? Which could still be interpreted maybe but is a lot more complex and might need DOM etc?
Just guessing here, if anyone knows the details would love to hear more.
zelphirkalt
Sounds like a really silly way to engineer things, but then again Google has the workforce to do lots of silly things and the cash to burn, so they can afford it.
zenmac
Yeah, my guess is that's Google using spaghetti code to keep their YT moat.
Chris2048
Could something like tree-shaking be used to reduce the player code to just the token generating bit? Or does the whole player js change for each video?
ACCount37
YouTube is mining cry-
I mean, running some unknown highly obfuscated CPU-demanding JS code on your machine - and using its results to decide whether to permit or deny video downloads.
The enshittification will continue until user morale improves.
Nsig/sig - Special tokens which must be passed to API calls, generated by code in base.js (the player code). This is what has broken for yt-dlp and other third-party clients. Instead of extracting the code that generates those tokens (e.g. using regular expressions) like we used to, we now need to run the whole base.js player code to get these tokens, because the code is spread out all over the player code.
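The old extraction approach amounted to pattern-matching the token-transform function out of base.js and evaluating only that snippet. A toy illustration of the idea (the player fragment and regex here are fabricated; the real obfuscated code and yt-dlp's actual patterns are far hairier):

```python
import re

# Fabricated stand-in for a fragment of the player JS.
PLAYER_JS = 'var nsig=function(a){a=a.split("");a.reverse();return a.join("")};'

def extract_nsig_body(player_js: str) -> str:
    """Pull out just the challenge function's body so it can be
    interpreted in isolation, instead of running all of base.js."""
    match = re.search(r'nsig=function\(a\)\{(.+?)\};', player_js)
    if not match:
        raise ValueError("challenge function not found; player JS layout changed")
    return match.group(1)
```

Once the challenge logic is hooked into code scattered throughout the player, there is no single function body to cut out, which is exactly why this style of extraction broke.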
PoToken - Proof of origin token, which Google has lately been enforcing for all clients, or video requests will fail with a 404. On Android it uses DroidGuard; for iOS, it uses built-in app integrity APIs. For the web, it requires that you run a snippet of JavaScript code (the challenge) in the browser to prove that you are not a bot. Previously, you needed an external tool to generate these PoTokens, but with the Deno change yt-dlp should now be capable of producing these tokens by itself.
SABR - Server side adaptive bitrate streaming, used alongside Google's UMP protocol to allow the server to have more control over buffering, given data from the client about the current playback position, buffered ranges, and more. This technology is also used to do server-side ad injection. Work is still being done to make 3rd party clients work with this technology (sometimes works, sometimes doesn't).
Nsig/sig extraction example: https://github.com/LuanRT/YouTube.js/blob/ee9c184eeb02d1074e...
PoToken generation: https://github.com/LuanRT/BgUtils
SABR: https://github.com/LuanRT/googlevideo
EDIT: Added links to specific code snippets