A year of uv: pros, cons, and should you migrate
376 comments
February 18, 2025 · barosl
epistasis
One other key part of this is freezing a timestamp with your dependency list, because Python packages are absolutely terrible at maintaining compatibility a year or three or five later as PyPI populates with newer and newer versions. The special toml incantation is [tool.uv] exclude-newer:
# /// script
# dependencies = [
# "requests",
# ]
# [tool.uv]
# exclude-newer = "2023-10-16T00:00:00Z"
# ///
https://docs.astral.sh/uv/guides/scripts/#improving-reproduc...
This has also let me easily reconstruct some older environments in less than a minute, when I've been version hunting for 30-60 minutes in the past. The speed of uv environment building helps a ton too.
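For anyone who hasn't tried it: with that block in place, running the script is just (filename hypothetical)
uv run fetch_page.py
and uv resolves against the snapshot date, builds (or reuses) a cached environment, and runs the script.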
woodruffw
Maybe I'm missing something, but why wouldn't you just pin to an exact version of `requests` (or whatever) instead? I think that would be equivalent in practice to limiting resolutions by release date, except that it would express your intent directly ("resolve these known working things") rather than indirectly ("resolve things from when I know they worked").
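(Concretely, in the inline block above that would just be something like
# dependencies = [
#     "requests==2.31.0",
# ]
with whatever version you know worked.)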
athrun
Pinning deps is a good thing, but it won't necessarily solve the issue of transitive dependencies (i.e. the dependencies of requests itself, for example), which will not be pinned themselves, given you don't have a lock file.
To be clear, a lock file is strictly the better option—but for single file scripts it's a bit overkill.
TeMPOraL
Except at least for the initial run, the date-based approach is the one closer to my intent, as I don't know what specific versions I need, just that this script used to work around a specific date.
procaryote
Oh that's neat!
I've just gotten into the habit of using only the dependencies I really must, because python culture around compatibility is so awful
CJefferson
This is the feature I would most like added to Rust: if you don't save a lock file, it is horrible trying to get back to the same versions of packages.
HumanOstrich
Why wouldn't you save the lock file?
athrun
Gosh, thanks for sharing! This is the remaining piece I felt I was missing.
epistasis
For completeness, there's also a script.py.lock file that can be checked into version control, but then you have twice as many files to maintain, and potentially lose sync as people forget about it or don't know what to do with it.
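(If you do want that lock file, newer uv versions can generate it straight from the inline metadata — command from memory, so double-check the docs:
uv lock --script script.py
which writes script.py.lock next to the script.)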
isoprophlex
Wow, this is such an insanely useful tip. Thanks!
zelphirkalt
Why didn't you create a lock file with the versions and of course hashsums in it? No version hunting needed.
CJefferson
Because the aim is to have a single file, fairly short, script. Even if we glued the lock file in somehow, it would be huge!
I prefer this myself, as almost all lock files are in practice “the version of packages at this time and date”, so why not be explicit about that?
zahlman
A major part of the point of PEP 723 (and the original competing design in PEP 722) is that the information a) is contained in the same physical file and b) can be produced by less sophisticated users.
leni536
That's fantastic, that's exactly what I need to revive a bit-rotten python project I am working with.
sunshowers
Oooh! Do you end up doing a binary search by hand and/or does uv provide tools for that?
code_biologist
Where would binary search come into it? In the example, the version solver just sees the world as though no versions released after `2023-10-16T00:00:00Z` existed.
aragilar
My feeling, sadly, is that because uv is the new thing, it hasn't had to handle anything but the common cases. This kinda gets a mention in the article, but is very much glossed over. There are still some sharp edges, and assumptions which aren't true in general (but are for the easy cases), and this is only going to make things worse, because now there's a new set of issues people run into.
EdwardDiego
As an example of an edge case - you have Python dependencies that wrap C libs that come in x86-64 flavour and arm-64.
Pipenv, when you create a lockfile, will only specify the architecture specific lib that your machine runs on.
So if you're developing on an ARM Macbook, but deploying on an Ubuntu x86-64 box, the Pipenv lockfile will break.
Whereas a Poetry lockfile will work fine.
And I've not found any documentation about how uv handles this, is it the Pipenv way or the Poetry way?
zahlman
PEP 751 is defining a new lockfile standard for the ecosystem, and tools including uv look committed to collaborating on the design and implementing whatever results. From what I've been able to tell of the surrounding discussion, the standard is intended to address this use case - rather, to be powerful enough that tools can express the necessary per-architecture locking.
The point of the PEP 723 comment style in the OP is that it's human-writable with relatively little thought. Cases like yours are always going to require actually doing the package resolution ahead of time, which isn't feasible by hand. So a separate lock file is necessary if you want resolved dependencies.
If you use this kind of inline script metadata and just specify the Python dependency versions, the resolution process is deferred. So as the script author you won't have the same kind of control, but the user's tooling can instead automatically do what's needed for the user's machine. There's inherently a trade-off there.
zanie
Yeah, uv uses a platform-independent resolution for its lockfiles and supports features that Poetry does not (rough config sketch after the links below), like
- Specifying a subset of platforms to resolve for
- Requiring wheel coverage for specific platforms
- Conflicting optional dependencies
https://docs.astral.sh/uv/concepts/resolution/#universal-res...
https://docs.astral.sh/uv/concepts/projects/config/#conflict...
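For reference, the config for those looks roughly like this — setting names from memory, so treat it as a sketch and check the linked docs:
[tool.uv]
# limit resolution to a subset of platforms
environments = [
    "sys_platform == 'linux'",
    "sys_platform == 'darwin'",
]
# require wheel coverage for specific platforms
required-environments = [
    "sys_platform == 'linux' and platform_machine == 'x86_64'",
]
# declare mutually exclusive extras
conflicts = [
    [{ extra = "cpu" }, { extra = "gpu" }],
]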
0xCMP
I think this is an awesome feature and will probably be a great alternative to my use of nix to do similar things for scripts/python, if nothing else because it's way less overhead to get it running and playing with something.
Nix, for all its benefits here, can be quite slow and otherwise pretty annoying to use as a shebang in my experience, versus just writing a package/derivation to add to your shell environment (i.e. it's already fully "built" and wrapped, but also requires a lot more ceremony + "switching" either the OS or HM configs).
EdwardDiego
It's not a feature that's exclusive to uv. It's a PEP, and other tools will eventually support it if they don't already.
zelphirkalt
Will nix be slow after the first run? I guess it will have to build the deps, but in a second run should be fast, no?
kokada
`nix-shell` (which is what the OP seems to be referring to) is always slow-ish (not really that slow if you are used to e.g. Java CLI commands, but definitely slower than I would like) because it doesn't cache evaluations AFAIK.
Flakes have caching, but support for `nix shell` as a shebang is relatively new (nix 2.19) and not widespread.
throwup238
Agreed. I did the exact same thing with that giant script venv and it was a constant source of pain because some scripts would require conflicting dependencies. Now with uv shebang and metadata, it’s trivial.
Before uv I avoided writing any scripts that depended on ML altogether, which is now unlocked.
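The whole pattern, for anyone who hasn't seen it, is roughly this (script contents hypothetical):
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = [
#     "requests",
# ]
# ///
import requests
print(requests.get("https://example.com").status_code)
chmod +x it and run it directly; uv builds or reuses the environment on the fly.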
8n4vidtmkvmk
You know what we need? In both python and JS, and every other scripting language, we should be able to import packages from a url, but with a sha384 integrity check like exists in HTML. Not sure why they didn't adopt this into JS or Deno. Otherwise installing random scripts is a security risk
woodruffw
Python has fully-hashed requirements[1], which is what you'd use to assert the integrity of your dependencies. These work with both `pip` and `uv`. You can't use them to directly import the package, but that's more because "packages" aren't really part of Python's import machinery at all.
(Note that hashes themselves don't make "random scripts" not a security risk, since asserting the hash of malware doesn't make it not-malware. You still need to establish a trust relationship with the hash itself, which decomposes to the basic problem of trust and identity distribution.)
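For anyone unfamiliar, a hashed requirements file looks something like this (the digest below is a placeholder, not a real one — you'd generate the real entries with e.g. `pip-compile --generate-hashes` or `uv pip compile --generate-hashes`):
requests==2.31.0 \
    --hash=sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
pip install --require-hashes -r requirements.txt
In hash-checking mode every pinned dependency (including transitive ones) needs a matching hash, or the install is rejected.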
gregmac
Good point, but it's still a very useful way to ensure it doesn't get swapped out underneath you.
Transitive dependencies are still a problem though. You kind of fall back to needing a lock file or specifying everything explicitly.
8n4vidtmkvmk
Right, still a security risk, but at least if I come back to a project after a year or two I can know that even if some malicious group took over a project, they at least didn't backport a crypto-miner or worse into my script.
zahlman
The code that you obtain for a Python "package" does not have any inherent mapping to a "package" that you import in the code. The name overload is recognized as unfortunate; the documentation writing community has been promoting the terms "distribution package" and "import package" as a result.
https://packaging.python.org/en/latest/discussions/distribut...
https://zahlman.github.io/posts/2024/12/24/python-packaging-...
While you could of course put an actual Python code file at a URL, that wouldn't solve the problem for anything involving compiled extensions in C, Fortran etc. You can't feasibly support NumPy this way, for example.
That said, there are sufficient hooks in Python's `import` machinery that you can make `import foo` programmatically compute a URL (assuming that the name `foo` is enough information to determine the URL), download the code and create and import the necessary `module` object; and you can add this with appropriate priority to the standard set of strategies Python uses for importing modules. A full description of this process is out of scope for a HN comment, but the relevant documentation is the `importlib` reference and the import system section of the language reference.
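A minimal sketch of the idea, for the curious — the URL layout is hypothetical, there's no caching or error handling, and it only works for pure-Python modules:
import importlib.abc
import importlib.util
import sys
import urllib.request

class URLFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    BASE_URL = "https://example.com/pkgs/"  # hypothetical flat index of .py files

    def find_spec(self, fullname, path=None, target=None):
        if path is not None:  # only handle top-level imports in this sketch
            return None
        origin = self.BASE_URL + fullname + ".py"
        return importlib.util.spec_from_loader(fullname, self, origin=origin)

    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        # fetch the source and execute it in the new module's namespace
        with urllib.request.urlopen(module.__spec__.origin) as resp:
            source = resp.read().decode("utf-8")
        exec(compile(source, module.__spec__.origin, "exec"), module.__dict__)

# register after the standard finders so normal imports are untouched
sys.meta_path.append(URLFinder())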
AgentME
Deno and npm both store the hashes of all the dependencies you use in a lock file and verify them on future reinstalls.
8n4vidtmkvmk
The lockfile is good, but I'm talking about this inline dependency syntax,
# dependencies = ['requests', 'beautifulsoup4']
And likewise, Deno can import by URL. Neither includes an integrity hash. For JS, I'd suggest something like
import * as goodlib from 'https://verysecure.com/notmalicious.mjs' with { integrity: "sha384-xxx" }
which mirrors https://developer.mozilla.org/en-US/docs/Web/Security/Subres... and https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
The Python/uv thing will have to come up with some syntax, I don't know what. Not sure if there's a precedent for attributes.
HumanOstrich
Where do you initially get the magical sha384 hash that proves the integrity of the package the first time it's imported?
8n4vidtmkvmk
Same way we do in JS-land: https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
tl;dr use `openssl` on command-line to compute the hash.
Ideally, any package repositories ought to publish the hash for your convenience.
This of course does nothing to prove that the package is safe to use, just that it won't change out from under your nose.
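FWIW the exact incantation from the MDN page is along the lines of
cat notmalicious.mjs | openssl dgst -sha384 -binary | openssl base64 -A
and you prepend "sha384-" to the output.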
shlomo_z
This is a nice feature, but I've not found it to be useful, because my IDE won't recognize these dependencies.
Or is it a skill issue?
zahlman
What exactly do you imagine that such "recognition" would entail? Are you expecting the IDE to provide its own package manager, for example?
TylerE
Generally it means "my inspections and autocomplete works as expected".
EdwardDiego
No, it's the fact that it's a rather new PEP, and our IDEs don't yet support it, because, rather new.
BossingAround
This looks horrible for anything but personal scripts/projects. For anything close to production purposes, this seems like a nightmare.
dagw
Anything that makes it easier to make a script that I wrote run on a colleagues machine without having to give them a 45 minute crash course of the current state of python environment setup and package management is a huge win in my book.
epistasis
There's about 50 different versions of "production" for Python, and if this particular tool doesn't appear useful to it, you're probably using Python in a very very different way than those of us who find it useful. One of the great things about Python is that it can be used in such diverse ways by people with very very very different needs and use cases.
What does "production" look like in your environment, and why would this be terrible for it?
stavros
It's not meant for production.
baq
Don’t use it in production, problem solved.
I find this feature amazing for one-off scripts. It’s removing a cognitive burden I was unconsciously ignoring.
zahlman
> As mentioned in the article, being able to have inline dependencies in a single-file Python script and running it naturally is just beautiful.
The syntax for this (https://peps.python.org/pep-0723/) isn't uv's work, nor are they first to implement it (https://iscinumpy.dev/post/pep723/). A shebang line like this requires the tool to be installed first, of course; I've repeatedly heard about how people want tooling to be able to bootstrap the Python version, but somehow it's not any more of a problem for users to bootstrap the tooling themselves.
And some pessimism: packaging is still not seen as the core team's responsibility, and uv realistically won't enjoy even the level of special support that Pip has any time soon. As such, tutorials will continue to recommend Pip (along with inferior use patterns for it) for quite some time.
> I have been thinking that a dedicated syntax for inline dependencies would be great, similar to JavaScript's `import ObjectName from 'module-name';` syntax. Python promoted type hints from comment-based to syntax-based, so a similar approach seems feasible.
First off, Python did no such thing. Type annotations are one possible use for an annotation system that was added all the way back in 3.0 (https://peps.python.org/pep-3107/); the original design explicitly contemplated other uses for annotations besides type-checking. When it worked out that people were really only using them for type-checking, standard library support was added (https://peps.python.org/pep-0484/) and expanded upon (https://peps.python.org/pep-0526/ etc.); but this had nothing to do with any specific prior comment-based syntax (which individual tools had up until then had to devise for themselves).
Python doesn't have existing syntax to annotate import statements; it would have to be designed specifically for the purpose. It's not possible in general (as your example shows) to infer a PyPI name from the `import` name; but not only that, dependency names don't map one-to-one to imports (anything that you install from PyPI may validly define zero or more importable top-level names, and of course the code might directly use a sub-package or an attribute of some module, which doesn't even have to be a class). So there wouldn't be a clear place to put such names except in a separate block by themselves, which the existing comment syntax already does.
Finally, promoting the syntax to an actual part of the language doesn't seem to solve a problem. Using annotations instead of comments for types allows the type information to be discovered at runtime (e.g. through the `__annotations__` attribute of functions). What problem would it solve for packaging? It's already possible for tools to use a PEP 723 comment, and it's also possible (through the standard library - https://docs.python.org/3/library/importlib.metadata.html) to introspect the metadata of installed packages at runtime.
sieve
Well, big fan of uv.
But... the 86GB python dependency download cache on my primary SSD, most of which can be attributed to the 50 different versions of torch, is a testament to the fact that even uv cannot salvage the mess that is pip.
Never felt this much rage at the state of a language/build system in the 25 years that I have been programming. And I had to deal with Scala's SBT ("Simple Build Tool") in another life.
simonw
I don't think pip is to blame for that. PyTorch is sadly an enormous space hog.
I just started a fresh virtual environment with "python -m venv venv" - running "du -h" showed it to be 21MB. After running "venv/bin/pip install torch" it's now 431MB.
The largest file in there is this one:
178M ./lib/python3.10/site-packages/torch/lib/libtorch_cpu.dylib
There's a whole section of the uv manual dedicated just to PyTorch: https://docs.astral.sh/uv/guides/integration/pytorch/
(I just used find to locate as many libtorch_cpu.dylib files as possible on my laptop and deleted 5.5GB of them.)
sieve
I use uv pip to install dependencies for any LLM software I run. I am not sure if uv re-implements the pip logic or hands over resolution to pip. But it does not change the fact that I have multiple versions of torch + multiple installations of the same version of torch in the cache.
Compare this to the way something like maven/gradle handles this and you have to wonder WTF is going on here.
simonw
uv implements its own resolution logic independently of pip.
Maybe your various LLM libraries are pinning different versions of Torch?
Different Python versions each need their own separate Torch binaries as well.
At least with uv you don't end up with separate duplicate copies of PyTorch in each of the virtual environments for each of your different projects!
conradev
uv should hard link files if they’re identical like Nix does
If a package manager stores more than it needs to, it is a package manager problem.
Spivak
You're about to be pleasantly surprised then.
https://docs.astral.sh/uv/reference/settings/#link-mode
It's even the default. Here's where it's implemented if you're curious https://github.com/astral-sh/uv/blob/f394f7245377b6368b9412d...
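It's also configurable if the default doesn't suit you — roughly (from memory):
[tool.uv]
link-mode = "copy"  # or "clone" / "hardlink"
or the equivalent --link-mode flag / UV_LINK_MODE env var.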
d0mine
What makes you think it doesn't?
nighthawk454
Testing this out locally, I repro'd pretty similar numbers on macOS ARM and docker. Unfortunately the CPU-only build isn't really any smaller either; I thought it was.
teej
You can specify platform specific wheels in your pyproject.toml
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[tool.uv.sources]
torch = [
{ index = "pytorch-cu124", marker = "platform_system != 'Darwin'" },
{ index = "pytorch-cpu", marker = "platform_system == 'Darwin'" },
]
rsyring
If you are using uv:
$ uv cache prune
$ uv cache clean
Take your pick and schedule to run weekly/monthly.
code_biologist
Light user of uv here. `prune` just saved me 1.1GiB. Thanks!
aragilar
That problem is very much not pip (pip is only the installer). The issues are:
* We have a conflict between being easy to use (people don't need to work out which version of cuda/which gpu settings/libraries/etc. to use) vs install size (it's basically the x86 vs arm issue, except at least 10 fold larger). Rather than making it the end-user's problem, packages bundle all possible options into a single artifact (macOS does the same, but see the 10 fold larger issue).
* There are almost fundamental assumptions (and the newer Python packaging tools, including uv, very much rely on these) that Python packaging makes that are inherently about how a system should act (basically, frozen, no detection available apart from the "OS"), which do not align with having hardware/software that must be detected. One could do this via sdists, but Windows plus the issues around dynamic metadata make this a non-starter (and hence tools like conda, spack and others from the more scientific side of the ecosystem have been created—notably on the more webby side, these problems are solved either via vendoring non-python libraries, or making it someone/something else's problem, hence docker or the cloud for databases or other companion services).
* Frankly, more and more developers have no idea how systems are built (and this isn't just a Python issue). Docker lets people hide their sins, with magical invocations that just work (and static linking in many cases sadly does the same). There are tools out of the hyperscalers which are designed to solve these problems, but they solve it by creating tools with which experts can wrangle many systems, and hence imply you have a team which can do the wrangling.
Can this be solved? Maybe, but not by a new tool (on its own). It would require a lot of devs who may not see much improvement to their own workflow to change it for others (to newer ones which remove the assumptions built into the current workflows), plus a bunch of work by key stakeholders (and maybe even the open sourcing of some crown jewels), and I don't see that happening.
sieve
> people don't need to work out which version of cuda/which gpu settings/libraries/etc. to use
This is not true in my case. The regular pytorch does not work on my system. I had to download a version specific to my system from the pytorch website using --index-url.
> packages bundle all possible options into a single artifact
Cross-platform Java apps do it too. For example, see https://github.com/xerial/sqlite-jdbc. But it does not become a clusterfuck like it does with python. After downloading gigabytes and gigabytes of dependencies repeatedly, the python tool you are trying to run will refuse to run for random reasons.
You cannot serve end-users a shit-sandwich of this kind.
The python ecosystem is a big mess and, outside of a few projects like uv, I don't see anyone trying to build a sensible solution that tries to improve both speed/performance and packaging/distribution.
aragilar
That's a pytorch issue. The solution is, as always, build from source. You will understand how the system is assembled, then you can build a minimal version meeting your specific needs (which, given wheels are a well-defined thing, you can then store on a server for reuse).
Cross-OS (especially with a VM like Java or JS) is relatively easy compared to needing specific versions for every single sub-architecture of a CPU and GPU system (and that's ignoring all the other bespoke hardware that's out there).
physicsguy
Cross platform Java doesn't have the issue because the JVM is handling all of that for you. But if you want native extensions written in C you get back to the same problem pretty quickly.
RockRobotRock
A lot of that is cuda blobs, right?
sieve
ROCm. About 40%. But there is duplication there as well. Two 16GB folders containing the exact same version.
homebrewer
Run rmlint on it, it will replace duplicate files with reflinks (if your fs supports them — xfs and btrfs do), or hardlinks if not.
IgorPartola
Sounds like a great use case for ZFS’s deduplication at block level.
forrestthewoods
Does uv have any plans for symlink/hardlink deduplication?
funnyAI
[dead]
paulddraper
> torch
Ah found the issue.
EdwardDiego
> And I had to deal with Scala's SBT ("Simple Build Tool") in another life.
I feel you.
amelius
For someone who was just about to give Scala a try, what's wrong with it and are there alternative build tools?
ATMLOTTOBEER
It defines a DSL for your build that looks roughly like Scala code. But… it’s not! And there is a confusing “resolution” system for build tasks/settings. It’s also slow as shit. See https://www.lihaoyi.com/post/SowhatswrongwithSBT.html for a comprehensive takedown. If you’re interested in just playing around with scala I would use
https://scala-cli.virtuslab.org
Or for larger projects, the thing the author of the linked article is plugging (mill).
globular-toast
Try building uv itself. Cargo used something like 40GiB of disk space somehow.
sieve
I am only criticizing python because I am actively using it.
The node ecosystem and the rust one seem to have their own issues. I have zero interest in either of them so I haven't looked into them in detail.
However, I have to deal with node on occasion because a lot of JS/CSS tooling is written using it. It has a HUGE transitive dependency problem.
BrenBarn
Like so many other articles that make some offhand remarks about conda, this article raves about a bunch of "new" features that conda has had for years.
> Being independent from Python bootstrapping
Yep, conda.
> Being capable of installing and running Python in one unified congruent way across all situations and platforms.
Yep, conda.
> Having a very strong dependency resolver.
Yep, conda (or mamba).
The main thing conda doesn't seem to have which uv has is all the "project management" stuff. Which is fine, it's clear people want that. But it's weird to me to see these articles that are so excited about being able to install Python easily when that's been doable with conda for ages. (And conda has additional features not present in uv or other tools.)
The pro and con of tools like uv is that they layer over the base-level tools like pip. The pro of that is that they interoperate well with pip. The con is that they inherit the limitations of that packaging model (notably the inability to distribute non-Python dependencies separately).
That's not to say uv is bad. It seems like a cool tool and I'm intrigued to see where it goes.
epistasis
These are good points. But I think there needs to be an explanation why conda hasn't taken off more. Especially since it can handle other languages too. I've tried to get conda to work for me for more than a decade, at least once a year. What happens to me:
1) I can't solve for the tools I need and I don't know what to do. I try another tool, it works, I can move forward and don't go back to conda
2) it takes 20-60 minutes to solve, if it ever does. I quit and don't come back. I hear this doesn't happen anymore, but to this day I shudder before I hit enter on a conda install command
3) I spoil my base environment with an accidental install of something, and get annoyed and switch away.
On top of that the commands are opaque, unintuitive, and mysterious. Do I do conda env command or just conda command? Do I need a -n? The basics are difficult and at this point I'm too ashamed to ask which of the many many docs explain it, and I know I will forget within two months.
I have had zero of these problems with uv. If I screw up or it doesn't work it tells me right away. I don't need to wait for a couple minutes before pressing y to continue, I just get what I need in at most seconds, if my connection is slow.
If you're in a controlled environment and need audited packages, I would definitely put up with conda. But for open source, personal throw away projects, and anything that doesn't need a security clearance, I'm not going to deal with that beast.
oefrha
Conda is the dreaded solution to the dreadful ML/scientific Python works-on-my-computer dependency spaghetti projects. One has to be crazy to suggest it for anything else.
uv hardly occupies the same problem space. It elevates DX with disciplined projects to new heights, but still falls short with undisciplined projects with tons of undeclared/poorly declared external dependencies, often transitive — commonly seen in ML (now AI) and scientific computing. Not its fault of course. I was pulling my hair out with one such project the other day, and uv didn’t help that much beyond being a turbo-charged pip and pyenv.
mufasachan
Eh, ML/scientific Python is large and not homogeneous. For code that should work on a cluster, I would lean towards a Docker/container solution. For simpler dependency use cases, the pyenv/venv duo is alright. For some specific libs that have a conda package, it might be better to use conda, _might be_.
One illustration is the CUDA toolkit with a torch install on conda. If you need a basic setup, it would work (and takes ages). But if you need some other specific tools in the suite, or need it to be more lightweight for whatever reason, then good luck.
btw, I do not see much interest in uv. pyenv/pip/venv/hatch are simple enough for me. No need for another layer of abstraction between my machine and my env. I will still keep an eye on uv.
nchagnet
Add to that the licensing of conda. In my company, we are not allowed to use conda because the company would rather not pay, so might as well use some other tool which does things faster.
aragilar
conda (the package) is open source, it's the installer from Anaconda Corp (nee ContinuumIO) and their package index that are a problem. If you use the installer from https://conda-forge.org/download/, you get the conda-forge index instead, which avoids the license issues.
droelf
We've been working on all the shortcomings in `pixi`: pixi.sh
It's very fast, comes with lockfiles and a project-based approach. Also comes with a `global` mode where you can install tools into sandboxed environments.
epistasis
My completely unvarnished thoughts, in the hope that they are useful: I had one JIRA-ticket-worth of stuff to do on a conda environment, and was going to try to use pixi, but IIRC I got confused about how to use the environment.yml and went back to conda grudgingly. I still have pixi installed on my machine, and when I look through the list of subcommands, it does seem to probably have a better UX than conda.
When I go to https://prefix.dev, the "Get Started Quickly" section has what looks like a terminal window, but the text inside is inscrutable. What do the various lines mean? There are directories, maybe commands, check boxes... I don't get it. It doesn't look like a shell despite the Terminal wrapping box.
Below that I see that there's a pixi.toml, but I don't really want a new toml or yml file, there's enough repository lice to confuse new people on projects already.
Any time spent educating on packaging is time not spent on discovery, and is an impediment to onboarding.
zoobab
I am trying to configure Pixi to use it with an Artifactory proxy in a corporate environment, and still could not figure out how to configure it.
agoose77
Have you used the contemporary tooling in this space? `mamba` (and ~therefore, `pixi`) is fast, and you can turn off the base environment. The UX is nicer,too!
null
matsemann
Conda might have all these features, but it's kinda moot when no one can get them to work. My experience with conda is pulling a project, trying to install it, and it then spending hours trying to resolve dependencies. And any change would often break the whole environment.
uv "just works". Which is a feature in itself.
uneekname
Yes, conda has a lot more features on paper. And it supports non-Python dependencies which is super important in some contexts.
However, after using conda for over three years I can confidently say I don't like using it. I find it to be slow and annoying, often creating more problems than it solves. Mamba is markedly better but still manages to confuse itself.
uv just works, if your desktop environment is relatively modern. that's its biggest selling point, and why I'm hooked on it.
optionalsquid
Besides being much slower, and taking up much more space per environment, than uv, conda also has a nasty habit of causing unrelated things to break in weird ways. I've mostly stopped using it at this point, for that reason, tho I've still had to reach for it on occasion. Maybe pixi can replace those use cases. I really should give it a try.
There's also the issue the license for using the repos, which makes it risky to rely on conda/anaconda. See e.g. https://stackoverflow.com/a/74762864
BrenBarn
Not sure what you mean about space. Conda uses hardlinks for the most part, so environment size is shared (although disk usage tools don't always correctly report this).
greazy
As far as I understand, the conda-forge distribution and channel solve a lot of issues. But it might not have the tools you need.
zahlman
Good to see you again.
>Like so many other articles that make some offhand remarks about conda, this article raves about a bunch of "new" features that conda has had for years.
Agreed. (I'm also tired of seeing advances like PEP 723 attributed to uv, or uv's benefits being attributed to it being written in Rust, or at least to it not being written in Python, in cases where that doesn't really hold up to scrutiny.)
> The pro and con of tools like uv is that they layer over the base-level tools like pip. The pro of that is that they interoperate well with pip.
It's a pretty big pro ;) But I would say it's at least as much about "layering over the base-level tools" like venv.
> The con is that they inherit the limitations of that packaging model (notably the inability to distribute non-Python dependencies separately).
I still haven't found anything that requires packages to contain any Python code (aside from any build system configuration). In principle you can make a wheel today that just dumps a platform-appropriate shared library file for, e.g. OpenBLAS into the user's `site-packages`; and others could make wheels declaring yours as a dependency. The only reason they wouldn't connect up - that I can think of, anyway - is because their own Python wrappers currently don't hard-code the right relative path, and current build systems wouldn't make it easy to fix that. (Although, I guess SWIG-style wrappers would have to somehow link against the installed dependency at their own install time, and this would be a problem when using build isolation.)
BrenBarn
> The only reason they wouldn't connect up - that I can think of, anyway - is because their own Python wrappers currently don't hard-code the right relative path
It's not just that, it's that you can't specify them as dependencies in a coordinated way as you can with Python libs. You can dump a DLL somewhere but if it's the wrong version for some other library, it will break, and there's no way for packages to tell each other what versions of those shared libraries they need. With conda you can directly specify the version constraints on non-Python packages. Now, yeah, they still need to be built in a consistent manner to work, but that's what conda-forge handles.
zahlman
Ah, right, I forgot about those issues (I'm thankful I don't write that sort of code myself - I can't say I ever enjoyed C even if I used to use it regularly many years ago). I guess PEP 725 is meant to address this sort of thing, too (as well as build-time requirements like compilers)... ?
I guess one possible workaround is to automate making a wheel for each version of the compiled library, and have the wheel version move in lockstep. Then you just specify the exact wheel versions in your dependencies, and infer the paths according to the wheel package names... it certainly doesn't sound pleasant, though. And, C being what it is, I'm sure that still overlooks something.
zahlman
> I still haven't found anything that requires packages to contain any Python code (aside from any build system configuration). In principle you can make a wheel today that just...
Ah, I forgot the best illustration of this: uv itself is available this way - and you can trivially install it with Pipx as a result. (I actually did this a while back, and forgot about it until I wanted to test venv creation for another comment...)
agent281
> But it's weird to me to see these articles that are so excited about being able to install Python easily when that's been doable with conda for ages. (And conda has additional features not present in uv or other tools.)
I used conda for a while around 2018. My environment became borked multiple times and I eventually gave up on it. After that, I never had issues with my environment becoming corrupted. I knew several other people who had the same issues and it stopped after they switched away from conda.
I've heard it's better now, but that experience burned me so I haven't kept up with it.
mkl
Having the features is not nearly as much use if the whole thing's too slow to use. I frequently get mamba taking multiple minutes to figure out how to install a package. I use and like Anaconda and miniforge, but their speed for package management is really frustrating.
droelf
Thanks for bringing up conda. We're definitely trying to paint this vision as well with `pixi` (https://pixi.sh) - which is a modern package manager, written in Rust, but using the Conda ecosystem under the hood.
It follows more of a project based approach, comes with lockfiles and a lightweight task system. But we're building it up for much bigger tasks as well (`pixi build` will be a bit like Bazel for cross-platform, cross-language software building tasks).
While I agree that conda has many short-comings, the fundamental packages are alright and there is a huge community keeping the fully open source (conda-forge) distribution running nicely.
bitvoid
I just want to give a hearty thank you for pixi. It's been an absolute godsend for us. I can't express how much of a headache it was to deal with conda environments with student coursework and research projects in ML, especially when they leave and another student builds upon their work. There was no telling if the environment.yml in a student's repo was actually up to date or not, and most often didn't include actual version constraints for dependencies. We also provide an HPC cluster for students, which brings along its own set of headaches.
Now, I just give students a pixi.toml and pixi.lock, and a few commands in the README to get them started. It'll even prevent students from running their projects, adding packages, or installing environments when working on our cluster unless they're on a node with GPUs. My inbox used to be flooded with questions from students asking why packages weren't installing or why their code was failing with errors about CUDA, and more often than not, it was because they didn't allocate any GPUs to their HPC job.
And, as an added bonus, it lets me install tools that I use often with the global install command without needing to inundate our HPC IT group with requests.
So, once again, thank you
serjester
I think at this point, the only question that remains is how Astral will make money. But if they can package some sort of enterprise package index with some security bells and whistles, it seems an easy sell into a ton of orgs.
BiteCode_dev
Charlie Marsh said in our interview they plan to compete with anaconda on b2b:
https://www.bitecode.dev/p/charlie-marsh-on-astral-uv-and-th...
Makes sense, the market is wide open for it.
claytonjy
A scenario for "don't use uv" I hope none of you encounter: many nvidia libraries not packaged up in something better like torch.
Here's just one example, nemo2riva, the first in several steps to taking a trained NeMo model and making it deployable: https://github.com/nvidia-riva/nemo2riva?tab=readme-ov-file#...
before you can install the package, you first have to install some other package whose only purpose is to break pip so it uses nvidia's package registry. This does not work with uv, even with the `uv pip` interface, because uv rightly doesn't put up with that shit.
This is of course not Astral's fault, I don't expect them to handle this, but uv has spoiled me so much it makes anything else even more painful than it was before uv.
zahlman
>whose only purpose is to break pip so it uses nvidia's package registry. This does not work with uv, even with the `uv pip` interface, because uv rightly doesn't put up with that shit.
I guess you're really talking about `nvidia-pyindex`. This works by leveraging the legacy Setuptools build system to "build from source" on the user's machine, but really just running arbitrary code. From what I can tell, it could be made to work just as well with any build system that supports actually orchestrating the build (i.e., not Flit, which is designed for pure Python projects), and with the modern `pyproject.toml` based standards. It's not that it "doesn't work with uv"; it works specifically with Pip, by trying to run the current (i.e.: target for installation) Python environment's copy of Pip, calling undocumented internal APIs (`from pip._internal.configuration import get_configuration_files`) to locate Pip's config, and then parsing and editing those files. If it doesn't work with `uv pip`, I'm assuming that's because uv is using a vendored Pip that isn't in that environment and thus can't be run that way.
Nothing prevents you, incidentally, from setting up a global Pip that's separate from all your venvs, and manually creating venvs that don't contain Pip (which makes that creation much faster): https://zahlman.github.io/posts/2025/01/07/python-packaging-... But it does, presumably, interfere with hacks like this one. Pip doesn't expose a programmatic API, and there's no reason why it should be in the environment if you haven't explicitly declared it as a dependency - people just assume it will be there, because "the user installed my code and presumably that was done using Pip, so of course it's in the environment".
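A concrete version of that setup, in case it's useful (the --python option needs a reasonably recent Pip, 22.3+ IIRC):
python -m venv --without-pip .venv
pip install --python .venv/bin/python requests
Creating the venv without Pip is noticeably faster, and the one global Pip installs into whichever environment you point it at.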
lijok
Instead of installing nvidia-pyindex, use https://docs.astral.sh/uv/configuration/indexes/ to configure the index nvidia-pyindex points to.
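Something along these lines in pyproject.toml should do it — the URL is from memory (it's whatever nvidia-pyindex writes into pip.conf), so verify it:
[[tool.uv.index]]
name = "nvidia"
url = "https://pypi.ngc.nvidia.com"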
ziml77
Surely you can just manually add their index, right?
claytonjy
yes, but if you’re not in their carefully directed “nemo environment” the nemo2riva command fails complaining about some hydra dependency. and on it goes…
zahlman
Yes, but the point is that they automate the process for you, because it's finicky.
arnath
I think the biggest praise I can give uv is that as a non Python dev, it makes Python a lot more accessible. The ecosystem can be really confusing to approach as an outsider. There’s like 5 different ways to create virtual environments. With uv, you don’t have to care about any of that. The venv and your Python install are just handled for you by ‘uv run’, which is magic.
something98
Can someone explain a non-project based workflow/configuration for uv? I get creating a bespoke folder, repo, and uv venv for certain long-lived projects (like creating different apps?).
But most of my work, since I adopted conda 7ish years ago, involves using the same ML environment across any number of folders or even throw-away notebooks on the desktop, for instance. I’ll create the environment and sometimes add new packages, but rarely update it, unless I feel like a spring cleaning. And I like knowing that I have the same environment across all my machines, so I don’t have to think about if I’m running the same script or notebook on a different machine today.
The idea of a new environment for each of my related “projects” just doesn’t make sense to me. But, I’m open to learning a new workflow.
Addition: I don’t run other’s code, like pretrained models built with specific package requirements.
ayjay_t
`uv` isn't great for that, I've been specifying and rebuilding my environments for each "project".
My one off notebook I'm going to set up to be similar to the scripts, will require some mods.
It does take up a lot more space, but it is quite a bit faster.
However, you could use the workspace concept for this I believe, and have the dependencies for all the projects described in one root folder and then all sub-folders will use the environment.
But I mean, our use case is very different than yours; it's not necessary to use uv.
something98
Gotcha. Thank you.
FYI, for anyone else that stumbles upon this: I decided to do a quick check on PyTorch (the most problem-prone dependency I've had), and noticed that they specifically recommend no longer using conda—and have since last November.
bityard
I personally have a "sandbox" directory that I put one-off and prototype projects in. My rule is that git repos never go in any dir there. I can (and do) go in almost any time and rm anything older than 12 months.
In your case, I guess one thing you could do is have one git repo containing you most commonly-used dependencies and put your sub-projects as directories beneath that? Or even keep a branch for each sub-project?
One thing about `uv` is that dependency resolution is very fast, so updating your venv to switch between "projects" is probably no big deal.
zahlman
> The idea of a new environment for each of my related “projects” just doesn’t make sense to me. But, I’m open to learning a new workflow.
First, let me try to make sense of it for you -
One of uv's big ideas is that it has a much better approach to caching downloaded packages, which lets it create those environments much more quickly. (I guess things like "written in Rust", parallelism etc. help, but as far as I can tell most of the work is stuff like hard-linking files, so it's still limited by system calls.) It also hard-links duplicates, so that you aren't wasting tons of space by having multiple environments with common dependencies.
A big part of the point of making separate environments is that you can track what each project is dependent on separately. In combination with Python ecosystem standards (like `pyproject.toml`, the inline script metadata described by https://peps.python.org/pep-0723/, the upcoming lock file standard in https://peps.python.org/pep-0751/, etc.) you become able to reproduce a minimal environment, automate that reproduction, and create an installable sharable package for the code (a "wheel", generally) which you can publish on PyPI - allowing others to install the code into an environment which is automatically updated to have the needed dependencies. Of course, none of this is new with `uv`, nor depends on it.
The installer and venv management tool I'm developing (https://github.com/zahlman/paper) is intended to address use cases like yours more directly. It isn't a workflow tool, but it's intended to make it easier to set up new venvs, install packages into venvs (and say which venv to install it into) and then you can just activate the venv you want normally.
(I'm thinking of having it maintain a mapping of symbolic names for the venvs it creates, and a command to look them up - so you could do things like "source `paper env-path foo`/bin/activate", or maybe put a thin wrapper around that. But I want to try very hard to avoid creating the impression of implementing any kind of integrated development tool - it's an integrated user tool, for setting up applications and libraries.)
cdavid
That's my main use case not-yet-supported by uv. It should not be too difficult to add a feature or wrapper to uv so that it works like pew/virtualenvwrapper.
E.g. calling that wrapper uvv, something like
1. uvv new <venv-name> --python=... ...# venvs stored in a central location
2. uvv workon <venv-name> # now you are in the virtualenv
3. deactivate # now you get out of the virtualenv
You could imagine additional features such as keeping a log of the installed packages inside the venv so that you could revert to arbitrary state, etc. as goodies given how much faster uv is.
dagw
I've worked like you described for years and it mostly works. Although I've recently started to experiment with a new uv based workflow that looks like this:
To open a notebook I run (via an alias)
uv tool run jupyter lab
and then in the first cell of each notebook I have !uv pip install my-dependencies
This takes care of all the venv management stuff and makes sure that I always have the dependencies I need for each notebook. Only been doing this for a few weeks, but so far so good.
uneekname
Why not just copy your last env into the next dir? If you need to change any of the package versions, or add something specific, you can do that without risking any breakages in your last project(s). From what I understand uv has a global package cache so the disk usage shouldn't be crazy.
Zizizizz
Just symlink the virtualenv folder and pyproject.toml it makes to whatever other project you want it to use.
BrenBarn
Yeah, this is how I feel too. A lot of the movement in Python packaging seems to be more in managing projects than managing packages or even environments. I tend to not want to think about a "project" until very late in the game, after I've already written a bunch of code. I don't want "make a project" to be something I'm required or even encouraged to do at the outset.
lmm
I have the opposite feeling, and that's why I like uv. I don't want to deal with "environments". When I run a Python project I want its PYTHONPATH to have whatever libraries its config file says it should have, and I don't want to have to worry about how they get there.
dharmab
I set up a "sandbox" project as an early step of setting up a new PC.
Sadly for certain types of projects like GIS, ML, scientific computing, the dependencies tend to be mutually incompatible and I've learned the hard way to set up new projects for each separate task when using those packages. `uv init; uv add <dependencies>` is a small amount of work to avoid the headaches of Torch etc.
iandanforth
Since this seems to be a love fest let me offer a contrarian view. I use conda for environment management and pip for package management. This neatly separates the concerns into two tools that are good at what they do. I'm afraid that uv is another round of "Let's fix everything" just to create another soon to be dead set of patterns. I find nothing innovative or pleasing in its design, nor do I feel that it is particularly intuitive or usable.
You don't have to love uv, and there are plenty of reasons not to.
wiseowise
> soon to be dead set of patterns.
Dozens of threads of people praising how performant and easy uv is, how it builds on standards and current tooling instead of inventing new incompatible set of crap, and every time one comment pops up with “akshually my mix of conda, pyenv, pipx, poetry can already do that in record time of 5 minutes, why do you need uv? Its going to be dead soon”.
tway1231789
To be fair here: conda was praised as the solution to everything by many when it was new. It did have its own standards of course. Now most people hate it.
Every packaging PEP is also hailed as the solution to everything, only to be superseded by a new and incompatible PEP within two years.
noodletheworld
So what?
If someone doesn’t want to use it, or doesn’t like it, or is, quite reasonably skeptical that “this time it’ll be different!” … let them be.
If it’s good, it’ll stand on its own despite the criticism.
If it can’t survive with some people disliking and criticising it is, it deserves to die.
Right? Live and let live. We don’t have to all agree all the time about everything.
uv is great. So use it if you want to.
And if you don’t, that’s okay too.
hiAndrewQuinn
Naive take. https://gwern.net/holy-war counsels that, in fact, becoming the One True Package Manager for a very popular programming language is an extremely valuable thing to aim towards. This is even outside of the fact that `uv` is backed by a profit-seeking company (cf https://astral.sh/about). I'm all for people choosing what works best for them, but I'm also staunchly pro-arguing over it.
falcor84
> I find nothing innovative or pleasing in its design, nor do I feel that it is particularly intuitive or usable.
TFA offers myriad innovative and pleasing examples. It would have been nice if you actually commented on any of those, or otherwise explained why you think otherwise.
data_ders
conda user for 10 years and uv skeptic for 18 months.
I get it! I loved my long-lived curated conda envs.
I finally tried uv to manage an environment and it's got me hooked. That a project's dependencies can be so declarative and separated from the venv really sings for me! No more meticulous tracking of an env.yml or requirements.txt, just `uv add` and `uv sync` and that's it! I just don't think about it anymore.
synparb
I'm also a long time conda user and have recently switched to pixi (https://pixi.sh/), which gives a very similar experience for conda packages (and uses uv under the hood if you want to mix dependencies from pypi). It's been great and also has a `pixi global`, similar to `pipx` etc., that makes it easy to grab general tools like ripgrep, ruff etc. and make them widely available, but still managed.
data_ders
whoa! TIL thanks will check it out
zahlman
Pip installs packages, but it provides rather limited functionality for actually managing what it installed. It won't directly spit out a dependency graph, won't help you figure out which of the packages installed in the current environment are actually needed for the current project, leaves any kind of locking up to you...
I agree that uv is the N+1th competing standard for a workflow tool, and I don't like workflow tools anyway, preferring to do my own integration. But the installer it provides does solve a lot of real problems that Pip has.
EdwardDiego
Yeah, I don't want my build tool to manage my Python, and I don't want my tool that installs Pythons to manage my builds.
In JVM land I used Sdkman to manage JVMs, and I used Maven or Gradle to manage builds.
I don't want them both tied to one tool, because that's inflexible.
Philpax
What does the additional flexibility get you? It's just one less thing to worry about when coordinating with a team, and it's easy to shift between different versions for a project if need be.
digdugdirk
I'm very much looking forward to their upcoming static type checker. Hopefully it will lead to some interesting new opportunities in the python world!
NeutralForest
uv is so much better than everything else, I'm just afraid they can't keep the team going. Time will tell but I just use uv and ruff in every project now tbh.
kyawzazaw
really need them to keep going
the number of people who switch to R because Python is too hard to set up is crazy high.
Especially among the life scientists and statisticians
shlomo_z
A familiar tale: Joe is hesitant about switching to UV and isn't particularly excited about it. Eventually, he gives it a try and becomes a fan. Soon, Joe is recommending UV to everyone he knows.
EdwardDiego
Joe has found the One True God, Joe must proselytise, the true God demands it.
motorest
I know good naming is hard, and there are an awful lot of project names that clash, but naming a project uv is unfortunate due to the ubiquitous nature of libuv
inejge
I don't think it's particularly problematic, uv the concurrency library and uv the Python tool cover such non-overlapping domains that opportunities for confusion are minimal.
(The principle is recognized in trademark law -- some may remember Apple the record label and Apple the computer company. They eventually clashed, but I don't see either of the uv's encroaching on the other's territory.)
sph
Sure, there are so few backend Node.js engineers. Let alone game engine developers and Blender users with their UV mapping tools. None of these people will ever encounter Python in their daily lives.
wiseowise
[flagged]
motorest
> I don't think it's particularly problematic, uv the concurrency library and uv the Python tool cover such non-overlapping domains that opportunities for confusion are minimal.
Google returns mixed results. You may assert it's not problematic, but this is a source of noise that projects with distinct names don't have.
nonameiguess
I'm not sure that's true. uvloop, built on libuv, is a pretty popular alternative event loop for async Python, much faster than the built-in. It certainly confused me at first to see a tool called "uv" that had nothing to do with that, because I'd been using libuv with Python for years before it came out.
hyperbrainer
I think of uv in the gfx programming sense too.
A very well written article! I admire the analysis done by the author regarding the difficulties of Python packaging.
With the advent of uv, I'm finally feeling like Python packaging is solved. As mentioned in the article, being able to have inline dependencies in a single-file Python script and running it naturally is just beautiful.
After being used to this workflow, I have been thinking that a dedicated syntax for inline dependencies would be great, similar to JavaScript's `import ObjectName from 'module-name';` syntax. Python promoted type hints from comment-based to syntax-based, so a similar approach seems feasible.
> It used to be that either you avoided dependencies in small Python script, or you had some cumbersome workaround to make them work for you. Personally, I used to manage a gigantic venv just for my local scripts, which I had to kill and clean every year.
I had the same fear for adding dependencies, and did exactly the same thing.
> This is the kind of thing that changes completely how you work. I used to have one big test venv that I destroyed regularly. I used to avoid testing some stuff because it would be too cumbersome. I used to avoid some tooling or pay the price for using them because they were so big or not useful enough to justify the setup. And so on, and so on.
I 100% sympathize with this.