We should all be using dependency cooldowns
110 comments
·November 21, 2025
embedding-shape
> for critical vulnerabilities to assess whether your product is affected by it. Only then do you need to update that specific dependency right away.
This is indeed what's missing from the ecosystem at large. People seem to be under the impression that as soon as a new release of some software/library/OS/application comes out, you need to move to it today. They don't seem to actually look through the changes, only doing that if anything breaks, and proceed to upgrade because "why not" or "it'll only get harder in the future", neither of which feels like a solid choice considering the trade-offs.
While we seem to have already known that staying at the edge of version numbers introduces massive churn and unneeded work, it appears we're only now waking up to the realization that it is a security tradeoff as well. Sadly, not enough tooling seems to take this into account (yet?).
jerf
I fought off the local imposition of Dependabot by executive fiat about a year ago by pointing out that it maximizes vulnerabilities to supply chain attacks if blindly followed or used as a metric excessively stupidly. Maximizing vulnerabilities was not the goal, after all. You do not want to harass teams with the fact that DeeplyNestedDepen just went from 1.1.54-rc2 to 1.1.54-rc3 because the worst case is that they upgrade just to shut the bot up.
I think I wouldn't object to "Dependabot on a 2-week delay" as something that at least flags updates. However, working in Go more than anything else, it was often the case even so that dependency alerts were just an annoyance if they weren't tied to a security issue or something. Dynamic languages and static languages do not have the same risk profiles at all. The idea some people have that all dependencies are super vital to update all the time, and the casual expectation of a constant stream of vital security updates, is not a general characteristic of programming; it is a specific characteristic not just of certain languages but arguably of the communities attached to those languages.
(What we really need is capabilities, even at a very gross level, so we can all notice that the supposed vector math library suddenly at version 1.43.2 wants to add network access, disk reading, command execution, and cryptography to the set of things it wants to do, which would raise all sorts of eyebrows immediately, even perhaps in an automated fashion. But that's a separate discussion.)
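Nothing mainstream supports this today, but purely as a hypothetical illustration of that "very gross level" version, imagine packages shipping a declared-capability manifest (all names invented here):
    # hypothetical capabilities manifest published with each release
    [capabilities]
    network = false       # no socket or HTTP access
    filesystem = "none"   # no reads/writes outside the package itself
    subprocess = false    # no command execution
    crypto = false
A release whose diff suddenly flips several of these to true would be exactly the automated eyebrow-raiser described above.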
skybrian
It seems like some of the arguments in favor of doing frequent releases apply at least a little bit to dependency updates?
Doing updates on a regular basis (weekly to monthly) seems like a good idea so you don't forget how to do them and the work doesn't pile up. Also, it's easier to debug a problem when there are fewer changes at once.
But they could be rescheduled depending on what else is going on.
dap
At my last job, we only updated dependencies when there was a compelling reason. It was awful.
What would happen from time to time was that an important reason did come up, but the team was now many releases behind. Whoever was unlucky enough to sign up for the project that needed the updated dependency now had to do all those updates of the dependency, including figuring out how they affected a bunch of software that they weren't otherwise going to work on. (e.g., for one code path, I need a bugfix that was shipped three years ago, but pulling that into my component affects many other code paths.) They now had to go figure out what would break, figure out how to test it, etc. Besides being awful for them, it creates bad incentives (don't sign up for those projects; put in hacks to avoid having to do the update), and it's also just plain bad for the business because it means almost any project, however simple it seems, might wind up running into this pit.
I now think of it this way: either you're on the dependency's release train or you jump off. If you're on the train, you may as well stay pretty up to date. It doesn't need to be every release the minute it comes out, but nor should it be "I'll skip months of work and several major releases until something important comes out". So if you decline to update to a particular release, you've got to ask: am I jumping off forever, or am I just deferring work? If you think you're just deferring the decision until you know if there's a release worth updating to, you're really rolling the dice.
(edit: The above experience was in Node.js. Every change in a dynamically typed language introduces a lot of risk. I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update. So although there's a lot of noise with regular dependency updates, it's not actually that much work.)
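A minimal sketch of that routine update loop (nothing project-specific assumed):
    # refresh Cargo.lock within the semver ranges declared in Cargo.toml
    $ cargo update
    # let the compiler and the test suite catch most breakage
    $ cargo test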
lock1
I think it also depends on the community as well. Last time I touched Node.js and Javascript-related things, every time I tried to update something, it practically guaranteed something would explode for no reason.
Meanwhile, my recent legacy Java project migration from JDK 8 to 21, plus a ton of dependency upgrades, has been a pretty smooth experience so far.
JoshTriplett
> I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update.
That's been my experience as well. In addition, the ecosystem largely holds to semver, which means a non-major upgrade tends to be painless, and conversely, if there's a major upgrade, you know not to put it off for too long because it'll involve some degree of migration.
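As a minimal illustration (the crate name is a placeholder), Cargo's default requirements are caret-style semver ranges:
    # Cargo.toml: "1.4" means ^1.4, i.e. >=1.4.0 and <2.0.0
    [dependencies]
    some-crate = "1.4"
cargo update will happily move within that range; crossing into 2.x requires editing the requirement, which is your cue not to defer the migration forever.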
tracnar
You could use this funky tool from oss-rebuild, which proxies registries so they return the state they were in at a past date: https://github.com/google/oss-rebuild/tree/main/cmd/timewarp
pas
> "it'll only get harder in the future"
that's generally true, no?
of course waiting a few days/weeks should be the minimum unless there's a CVE (or equivalent) that applies
hypeatei
> Sadly, not enough tooling seems to take this into account
Most tooling (e.g. Dependabot) allows you to set an interval between version checks. What more could be done on that front exactly? Devs can already choose to check less frequently.
mirashii
The check frequency isn't the problem, it's the latency between release and update. If a package was released 5 minutes before dependabot runs and you still update to it, your lower frequency hasn't really done anything.
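For what it's worth, newer Dependabot configs can express a cooldown directly, which addresses the latency rather than the frequency; a sketch from memory (double-check the key names against the current docs):
    # .github/dependabot.yml
    version: 2
    updates:
      - package-ecosystem: "npm"
        directory: "/"
        schedule:
          interval: "weekly"
        cooldown:
          default-days: 14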
silvestrov
I think the main question is: does your app get unknown input (i.e., input controlled by other people)?
Browsers get a lot of unknown input, so they have to update often.
A Weather app is likely to only get input from one specific site (controlled by the app developers), so it should be relatively safe.
jerf
Also, if you are updating "right away" it is presumably because of some specific vulnerability (or set of them). But if you're in an "update right now" mode, you have the most eyes on the source code in question at that point in time, and it's probably a relatively small patch for the targeted problem. Such a patch is the absolute worst time for an attacker to try to sneak anything into a release, the exact and complete opposite of the conditions they are looking for.
Nobody is proposing a system that utterly and completely locks you out of all updates if they haven't aged enough. There is always going to be an override switch.
justsomehnguy
> People in this thread are worried that they are significantly vulnerable if they don't update right away
Most of them assume that because they are working on some publicly accessible website, 99% of the people and orgs in the world must be running nothing but publicly accessible websites.
duped
A million times this. You update a dependency when there are bug fixes or features that you need (and this includes patching vulnerabilities!). Those situations are rare. Otherwise you're just introducing risk into your system - and not that you're going to be caught in some dragnet supply chain attack, but that some dependency broke something you relied on by accident.
Dependencies are good. Churn is bad.
gr4vityWall
The Debian stable model of having a distro handle common dependencies with a full system upgrade every few years looks more and more sane as years pass.
It's a shame some ecosystems move waaay too fast, or don't have a good story for distro-specific packages. For example, I don't think there are Node.js libraries packaged for Debian that let you install them from apt and use them in projects. I might be wrong.
embedding-shape
> For example, I don't think there are Node.js libraries packaged for Debian that allow you to install them from apt and use it in projects
Web search shows some: https://packages.debian.org/search?keywords=node&searchon=na... (but also shows "for optimizing reasons some results might have been suppressed" so might not be all)
Though it's probably different in other distros; Arch, for example, seems to have none.
kykat
It is possible to work with Rust using Debian repositories as the only source.
andrewla
I'm not convinced that this added latency will help, especially if everyone uses it. It may protect you as long as nobody else uses a cooldown period, but once everyone uses one then the window between the vulnerability being introduced and it being detected will expand by a similar amount, because without exposure it is less likely to be found.
cosmic_cheese
I think there's a much stronger argument for policies that limit both the number and the complexity of dependencies. Don't add one unless it's highly focused (no "everything libraries" that pull in entire universes of their own) and carries a high level of value. A project's entire dependency tree should be small and clean.
Libraries themselves should perhaps also take a page from the book of Linux distributions and offer LTS (long term support) releases that are feature frozen and include only security patches, which are much easier to reason about and periodically audit.
xmodem
I've seen this argument made frequently. It's clearly a popular sentiment, but I can't help feeling that it's one of those things that sounds nice in theory if you don't think about it too hard. (Also, cards on the table: I personally really like being able to pull in a tried-and-tested implementation of code that solves a common problem and is also used by, in some cases, literally millions of other projects. I dislike having to re-solve the same problem I have already solved elsewhere.)
Can you cite an example of a moderately-widely-used open source project or library that is pulling in code as a dependency that you feel it should have replicated itself?
What are some examples of "everything libraries" that you view as problematic?
buu700
I think AI nudges the economics more in this direction as well. Adding a non-core dependency has historically bought short-term velocity in exchange for different long-term maintenance costs. With AI, there are now many more cases where a first-party implementation becomes cheaper/easier/faster in both the short term and the long term.
Of course it's up to developers to weigh the tradeoffs and make reasonable choices, but now we have a lot more optionality. Reaching for a dependency no longer needs to be the default choice of a developer on a tight timeline/budget.
xmodem
Let's have AI generate the same vulnerable code across hundreds of projects, most of which will remain vulnerable forever, instead of having those projects all depend on a central copy of that code that can be fixed and distributed once the issue gets discovered. Great plan!
buu700
You're attacking a straw man. No one said not to use dependencies.
tcfhgj
Won't using highly focused dependencies increase the number of dependencies?
Limiting the number of dependencies, but then rewriting them in your own code, will also increase the maintenance burden and compile times
DrScientist
The thing I find most odd about the constant pressure to update to the most recent, and implicitly best, version is the underlying belief that software gets uniformly better with each release.
Bottom line: those security bugs are not all from version 1.0, and when you update you may well just be swapping known bugs for unknown bugs.
As has been said elsewhere: sure, monitor published issues and patch if needed, but don't just blindly update.
switchbak
I remember this used to actually be the case, but that was many moons ago when you'd often wait a long time between releases. Or maybe the quality bar was lower generally, and it was just easier to raise it?
These days it seems most software just changes mostly around the margins, and doesn't necessarily get a whole lot better. Perhaps this is also a sign I'm using boring and very stable software which is mostly "done"?
OhMeadhbh
I'm not arguing that cooldowns are a bad idea. But I will argue that the article presents a simplified version of user behaviour. One of the reasons people upgrade their dependencies is to get bug fixes and feature enhancements. So there may be significant pressure to upgrade as soon as the fix is available, cooldowns be damned!
If you tell people that cooldowns are a type of test and that until the package exits the testing period, it's not "certified" [*] for production use, that might help with some organizations. Or rather, would give developers an excuse for why they didn't apply the tip of a dependency's dev tree to their PROD.
So... not complaining about cooldowns, just suggesting some verbiage around them to help contextualize the suitability of packages in the cooldown state for use in production. There are, unfortunately, several mid-level managers who are under pressure to close Jira tickets IN THIS SPRINT and will lean on the devs to cut whichever corners need to be cut to make it happen.
[*] for some suitable definition of the word "CERTIFIED."
nine_k
There is a difference between doing
$ npm i
78 packages upgraded
and upgrading just one dependency from 3.7.2_1 to 3.7.2_2, after carefully looking at the code of the bugfix.
The cooldown approach makes the automatic upgrades of the former kind much safer, while allowing for the latter approach when (hopefully rarely) you actually need a fix ASAP.
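A usage sketch of the latter, targeted style (package and version purely illustrative):
    # bump exactly one dependency to a reviewed release and pin it exactly
    $ npm install left-pad@1.3.0 --save-exact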
compumike
There's a tradeoff, and the assumption here (which I think is solid) is that there's more benefit in avoiding a supply chain attack by blindly (by default) using a dependency cooldown than in avoiding a zero-day by blindly (by default) staying on the bleeding edge of new releases.
It's comparing the likelihood of an update introducing a new vulnerability to the likelihood of it fixing a vulnerability.
While the article frames this problem in terms of deliberate, intentional supply chain attacks, I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.
On the unintentional bug/vulnerability side, I think there's a similar argument to be made. Maybe even SemVer can help as a heuristic: a patch version increment is likely safer (less likely to introduce new bugs/regressions/vulnerabilities) than a minor version increment, so a patch version increment could have a shorter cooldown.
If I'm currently running version 2.3.4, and there's a new release 2.4.0, then (unless there's a feature or bugfix I need ASAP), I'm probably better off waiting N days, or until 2.4.1 comes out and fixes the new bugs introduced by 2.4.0!
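If I remember Dependabot's cooldown options correctly, that graduated heuristic can already be expressed per semver level, roughly (a fragment of a dependabot.yml cooldown block; key names from memory, worth verifying):
    cooldown:
      semver-major-days: 30
      semver-minor-days: 14
      semver-patch-days: 3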
woodruffw
Yep, that's definitely the assumption. However, I think it's also worth noting that zero-days, once disclosed, do typically receive advisories. Those advisories then (at least in Dependabot) bypass any cooldown controls, since the thinking is that a known vulnerability is more important to remediate than the open-ended risk of a compromised update.
> I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.
Yes, absolutely! The overwhelming majority of vulnerabilities stem from normal accidental bug introduction -- what makes these kinds of dependency compromises uniquely interesting is how immediately dangerous they are versus, say, a DoS somewhere in my network stack (where I'm not even sure it affects me).
Havoc
> we should all
Except if everyone does it, the chance of malicious things being spotted in the source also drops, by virtue of fewer eyeballs.
It still helps, though, in cases where the maintainer spots it, etc.
smaudet
> also drops, by virtue of fewer eyeballs
I don't think the people automatically updating and getting hit with the supply chain attack are also scanning the code, so I don't think this will impact them much.
If instead, updates are explicitly put on cooldowns, with the option of manually updating sooner, then there would be more eyeballs, not fewer, as people are more likely to investigate patch notes, etc., possibly even test in isolation...
woodruffw
(Author of the post.)
The underlying premise here is that supply chain security vendors are honest in their claims about proactively scanning (and effectively detecting + reporting) malicious and compromised packages. In other words, it's not about eyeballs (I don't think people who automatically apply Dependabot bumps are categorically reading the code anyways), but about rigorous scanning and reporting.
tjpnz
You might read the source if something breaks but in a successful supply chain attack that's unlikely to happen. You push to production, go home for the evening and maybe get pinged about it by some automation in a few weeks.
theoldgreybeard
Joke's on them, I already have 10-year dependency cooldowns on the app I work on at work!
nitwit005
This did make me think of our app running on Java 8.
Although, I suppose we've probably updated the patch version.
marcosdumay
There's always the one that requires Java 8, and the one that requires Java >= 11.
cheschire
Retirement Driven Development
jayd16
Doesn't this mean you're leaving yourself open to known vulnerabilities during that "cool down" time?
swatcoder
No.
A sane "cooldown" is just for automated version updates relying on semantic versioning rules, which is a pretty questionable practice in the first place, but one that is indeed made a lot safer this way.
You can still manually update your dependency versions when you learn that your code is exposed to some vulnerability that's purportedly been fixed. It's no different than manually updating your dependency version when you learn that there's some implementation bug or performance cliff that was fixed.
You might even still use an automated system to identify these kinds of "critical" updates and bring them to your attention, so that you can review them and can appropriately assume accountability for the choice to incorporate them early, bypassing the cooldown, if you believe that's the right thing to do.
Putting in that effort, having the expertise to do so, and assuming that accountability is kind of your "job" as a developer or maintainer. You can't just automate and delegate everything if you want people to be able to trust what you share with them.
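Renovate can be set up roughly along those lines, for example in renovate.json (a sketch, assuming minimumReleaseAge and vulnerabilityAlerts still behave the way I remember from the docs): ordinary updates wait out the cooldown, while vulnerability alerts are raised for you to review and consciously fast-track.
    {
      "minimumReleaseAge": "14 days",
      "vulnerabilityAlerts": {
        "enabled": true
      }
    }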
jayd16
If you could understand the quality of updates you're pulling in, that solves the issue entirely. The point is that you can't.
There's no reason to pretend we live in a world where everyone is manually combing through the source of every dependency update.
astrobe_
TFA shows that most vulnerabilities have a "window of opportunity" smaller than one day. Are you anxious going into the weekend because a zero-day or a major bug could be made public on Friday evening?
jayd16
Well, then you agree that the answer is yes. At the end of the article a 14-day window is mentioned but not dismissed, and its downsides are not mentioned.
jcalvinowens
Yep. Not only vulnerabilities, but just bugs in general, which usually matter more than vulnerabilities IMHO.
bityard
Do you believe new releases don't introduce new bugs?
jcalvinowens
Obviously. Every release introduces bugs. There's an inevitable positive correlation between the amount of code we write and the number of bugs we introduce, we just try to minimize it.
The probability of introducing bugs is a function of the amount of development being done. Releasing less often doesn't change that. In fact, under that assumption, delaying releases strictly increases the amount of time users are affected by the average bug.
People who do this tell themselves the extra time allows them to catch more bugs. But in my experience that's a bedtime story, most bugs aren't noticed until after deployment anyway.
That's completely orthogonal to slowly rolling out changes, btw.
woodruffw
To be clear, there's no reason why you can't update dependencies in advance of a cooldown period. The cooldown is an enforced policy that you can choose to override as needed.
(This also doesn't apply to vulnerabilities per se, since known vulnerabilities typically aren't evaluated against cooldowns by tools like Dependabot.)
jcalvinowens
No, you can't: the cooldown period is started by the new upstream release. So if you follow this "rule" you're guaranteed to be behind the latest upstream release.
programmertote
I know it's impossible in some software stacks and ecosystems. But I live mostly in the data world, so I could usually avoid such issues by aggressively keeping my upstream dependency list lean.
P.S. When I was working at Amazon, I remember that a good number of on-call tickets were about fixing dependencies (most of them were about updating the outdated Scala Spark framework--I believe it was 2.1.x or older) and patching/updating the OSes in our clusters. What the team should have done (I mentioned this to my manager) is create clusters dynamically (do not allow long-lived clusters even if the end users prefer it that way) and upgrade the Spark library. Of course, we had a bunch of other annual and quarterly OKRs (and KPIs) to meet, so updating Spark got the lowest of priorities...
cxr
What everyone should all be doing is practicing the decades-old discipline of source control. Attacks of the form described in the post, where a known-good, uncompromised dependency is compromised at the "supply chain" level, can be 100% mitigated—not fractionally or probabilistically—by cutting out the vulnerable supply chain. The fact that people are still dragging their feet on this and resist basic source control is the only reason why this class of attack is even possible. That vendoring has so many other benefits and solves other problems is even more reason to do so.
Stacking up more sub-par tooling is not going to solve anything.
Fortunately this is a problem that doesn't even have to exist, and isn't one that anyone falls into naturally. It's a problem that you have to actively opt into by taking steps like adding things to .gitignore to exclude them from source control, downloading and using third-party tools in a way that introduces this and other problems, et cetera—which means you can avoid all of it by simply not taking those extra steps.
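In Go, for instance, vendoring is a single command plus a commit (other ecosystems have rougher equivalents, such as committing node_modules or mirroring a registry):
    # copy every dependency's source into ./vendor and commit it
    $ go mod vendor
    $ git add vendor && git commit -m "Vendor dependencies"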
(Fun fact: on a touch-based QWERTY keyboard, the gesture to input "vendoring" by swiping overlaps with the gesture for "benefitting".)
dbdr
Doesn't vendoring solve the supply chain issue in the same way as picking a dependency version and never upgrading would? (especially if your package manager includes a hash of the dependency in a lock file)
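Right; an npm lockfile, for example, pins both the resolved artifact and a content hash per dependency (entry abbreviated, hash elided):
    "node_modules/left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }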
layer8
People in this thread are worried that they are significantly vulnerable if they don't update right away. However, this is mostly not an issue in practice. A lot of software doesn't have continuous deployment, but instead has customer-side deployment of new releases, which follows a slower rhythm of several weeks or months, barring emergencies. They are fine. Most vulnerabilities that aren't supply-chain attacks are only exploitable under special circumstances anyway. The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affected by it. Only then do you need to update that specific dependency right away.