My takeaways from DjangoCon EU 2025
114 comments · April 27, 2025 · gitroom
Arbortheus
Django is still great.
I recently upgraded two ~10-year-old legacy applications at work: one in Flask, one in Django. This made me appreciate Django's "batteries included" philosophy a lot more.
Even though the Django legacy application was much larger, it had barely any extensions to "vanilla Django". By comparison, the Flask application had a dozen third-party flask-* dependencies providing functionality like auth and permissions that Django has built-in. Many of these dependencies were archived abandonware that hadn't been maintained in a decade.
When it came to upgrading the Django app, I had one giant release notes page to read. I didn't need to switch any packages, just make some pretty simple code changes for clearly documented deprecations. For the Flask app I had to read dozens of release notes pages, migrate to new maintained packages, and rework several untested features (see: legacy application).
In my mind, "batteries included" is an underrated philosophy of Django. The ecosystem is also so mature now that radical breaking changes are unlikely.
Perhaps there are some parallels to draw with newer trendy (but minimalistic) python frameworks like FastAPI.
If I were building a web application I wanted to last a decade or more, Django would be up there in tech choices - boring and sensible, but effective.
explodes
I haven't used Django in 10 or 12 years, but I cracked it open the other day. It was cool to see that all the things I loved are largely unchanged; I was able to step right back in.
BiteCode_dev
The ecosystem has improved though. django-ninja is great for APIs, django-cotton brings component support, and there are better options than Celery for queueing.
sgt
I've stuck to DRF for all these years. But wouldn't mind looking at django-ninja. Is it better?
Scarblac
There is buzz around the combination of Django and HTMX, worked on by the same people in one team, as a much simpler alternative to split frontend and backend teams with a REST API in between (and perhaps NextJS as well, etc).
fidotron
Are people choosing Django for new projects much these days?
sgt
Absolutely. For what it does, Django is pretty much the best full stack Python web framework there is. It's also a great way to rapidly develop (just sticking to synchronous, which Django is best at).
One can then later consider spinning certain logic off into a separate service (e.g. in Golang), if speed is a concern with Python.
ashwinsundar
I chose Django + htmx and a small amount of Alpine.js for a full-stack software project that is currently being launched. I had zero professional experience with Django (or Python really) before starting. I was able to develop the entire application on my own, in my spare time, and had time left over to also handle infrastructure and devops myself.
I prefer Python and its web frameworks over TypeScript/React because there is a lot more stability and a lot less "framework-of-the-week"-itis to contend with. It's much easier to reason about Django code than any React project I've worked on professionally. IMO when you don't have a firehose of money aimed at you, Python is the way to go.
sgt
Yes, you can do the whole web app (or web site if you will) like that without any complicated dependencies. This will be great if you touch the project only now and again e.g. the typical side project that you want to still work in the year 2030 without major changes.
Yet the approach also scales up to enterprise-grade projects, leveraging DRF, django-cotton, and so on (and htmx).
pabe
Yes. Still one of the best batteries included web frameworks for creating anything that's more of a website (e.g. E-Commerce) than a web app (e.g. Photoshop). No, you don't need NextJs and friends for everything ;)
tcdent
I just rolled a backend using FastAPI and SQLAlchemy and it made me miss Django.
Too much other stuff going on in this app to incorporate Django, but it's still way ahead of the curve compared to bringing together independent micro frameworks.
sgt
It's brave trying FastAPI if you haven't tried it before. Going async is going to be quite different and you need to be more careful when designing your API. Most people will never need it.
This is why most folks who just need a plain Python API, without anything else, usually go for Flask, which is vastly simpler. For a more complete web app or site, I would recommend Django.
thenaturalist
Out of naive curiosity of considering your first stack vs. Django: What makes Django so way ahead of the curve?
tcdent
The ORM is so, so much better designed than SQLAlchemy v2. Performing queries, doing joins, and executing in transactions all feel clean and concise. The latter feels dated, and I find it hard to believe there's not a widely accepted replacement yet.
In terms of views, route configuration and Django's class-based views are sorely missed when using FastAPI. The dependency pattern is janky and if you follow the recommended pattern of defining your routes in decorators it's not obvious where your URL structure is even coming from.
BiteCode_dev
django-ninja will give you a fastapi like experience without the hassle
fhd2
All the time.
1. Very easy to find developers for. Python developers are everywhere, and even if they haven't worked with Django, it's incredibly easy to learn.
2. Simple stuff is ridiculously fast, thanks to the excellent ORM and (to my knowledge fairly unique) admin.
3. It changes surprisingly little over time, pretty easy to maintain.
ranger_danger
My only criticism is that die-hard django devs constantly brush aside the admin and can't stop telling people not to use it. I think it's a huge mistake.
It's extremely well-designed and extensible, there is no reason to reinvent the wheel when so much time and effort has been put into it.
They will complain of things like "eventually you will have to start over with a custom solution anyway"... but whatever gripes they have, could just be put into improving the admin to make it better at whatever they're worried about.
Personally I've not run into something I couldn't make work in the admin without having to start over. My own usecases have been CRUD for backoffice users/management and I've had great success with that at several different companies over the last ~15 years.
People will say "it's only for admins you trust" yet it has very extensive permissions and form/model validation systems heavily used in the admin and elsewhere, and they are easily extensible.
fhd2
I use the heck out of the admin. I wouldn't say I'm super experienced with Django, but my solution to users being overwhelmed by doing things there is to build a bit of extra UI just for the stuff they need to do, more Rails style. Meaning: I don't go into fighting too much with the admin when what it can do out of the box is not feasible for some. But even if the admin ends up being used only by devs and some trained folks, I think it has amazing utility.
andybak
Absolutely!
I've been saying the same thing for decades (checks calendar - almost literally!)
jdboyd
The admin is at least 75% of why I choose Django over another framework.
vFunct
LLMs are experts at Django, as there's 20 years of training data on it as well as just being written in the world's most popular language. LLMs can pump out full featured Django sites like anything.
I don't know why anyone would use any other framework.
ropable
We've used (and continue to use) Django for bespoke applications for a decade and a half now. It continues to be the most well-supported, well-governed, well-documented, batteries-included, extensible web framework of all the ones we've tried. Finding developers with experience using it (or upskilling them) is easy. As a choice of web technology, it's one of those that we've never regretted investing in.
cjauvin
For a complete solution requiring many traditional high-level components like templating, forms, etc, then yes, clearly Django. But for something looking more like a REST API, with auto-generated documentation, I would nowadays seriously consider FastAPI, which, when used with its typed Pydantic integration, provides a very powerful solution with very little code.
wahnfrieden
Django Ninja?
macNchz
Works great, I've been using it in production for a few years. DRF was one of my least favorite bits of the Django world and Ninja has been an excellent alternative.
I still love Django for greenfield projects because it eliminates many decision points that take time and consideration but don't really add value to a pre-launch product.
tmnvix
Thanks for the summary. Looking forward to the videos becoming available.
> I talked to this speaker afterward, and asked him how they did nested modals + updating widgets in a form after creating a new object in a nested modal. He showed me how he did it, I've been trying to figure this out for 8 months!
Do share!
benwilber0
> Always use a BigInt (64 bits) or UUID for primary keys.
Use bigint, never UUID. UUIDs are massive (2x a bigint) and now your DBMS has to copy that enormous value to every side of a relation.
It will bloat your table and indexes 2x for no good reason whatsoever.
Never use UUIDs as your primary keys.
bsder
> Never use UUIDs as your primary keys.
This seems like terrible advice.
For the vast, vast, vast majority of people, if you don't have an obvious primary key, choosing UUIDv7 is going to be an absolute no-brainer choice that causes the least amount of grief.
Which of these is an amateur most likely to hit: crash caused by having too small a primary key and hitting the limit, slowdowns caused by having a primary key that is effectively unsortable (totally random), contention slowdowns caused by having a primary key that needs a lock (incrementing key), or slowdowns caused by having a key that is 16 bytes instead of 8?
Of all those issues, the slowdown from a 16 byte key is by far the least likely to be an issue. If you reach the point where that is an issue in your business, you've moved off of being a startup and you need to cough up real money and do real engineering on your database schemas.
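For reference, what makes UUIDv7 "effectively sortable" is its layout: the first 48 bits are a Unix millisecond timestamp, so fresh keys land in roughly insertion order in an index. Python's stdlib doesn't ship a uuid7() generator yet (as of 3.13), but a minimal sketch following the RFC 9562 layout looks like this (the function name is my own):

```python
import os
import time
import uuid

def uuidv7() -> uuid.UUID:
    """Minimal UUIDv7 sketch per RFC 9562: a 48-bit Unix millisecond
    timestamp, then version/variant bits, then random bits."""
    ms = time.time_ns() // 1_000_000
    b = bytearray(os.urandom(16))    # random filler for the low bits
    b[0:6] = ms.to_bytes(6, "big")   # timestamp in the most significant bits
    b[6] = (b[6] & 0x0F) | 0x70      # version nibble = 7
    b[8] = (b[8] & 0x3F) | 0x80      # RFC variant bits
    return uuid.UUID(bytes=bytes(b))

first = uuidv7()
time.sleep(0.002)                    # force a later millisecond
second = uuidv7()
```

Because the timestamp occupies the most significant bits, `first < second` holds whenever they were generated in different milliseconds, which is what keeps index pages from filling in the random-write pattern of v4.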
sgarland
The problem is that companies tend to only hire DB expertise when things are dire, and then, the dev teams inevitably are resistant to change.
You can monitor and predict the growth rate of a table; if you don’t know you’re going to hit the limit of an INT well in advance, you have no one to blame but yourself.
Re: auto-incrementing locks, I have never once observed that to be a source of contention. Most DBs are around 98/2% read/write. If you happen to have an extremely INSERT-heavy workload, then by all means, consider alternatives, like interleaved batches or whatever. It does not matter for most places.
I agree that UUIDv7 is miles better than v4, but you’re still storing far more data than is probably necessary. And re: 16 bytes, MySQL annoyingly doesn’t natively have a UUID type, and most people don’t seem to know about casting it to binary and storing it as BINARY(16), so instead you get a 36-byte PK. The worst.
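To illustrate the point about BINARY(16): the compact form is just the UUID's raw bytes, and the conversion is lossless. This stdlib-only sketch shows the 16-byte vs 36-byte difference (the actual column type and casting are done on the MySQL side):

```python
import uuid

u = uuid.uuid4()

packed = u.bytes   # 16 bytes: what a BINARY(16) column would store
text = str(u)      # 36 chars: what a naive CHAR(36) column stores

# Round-trip: the binary form loses nothing.
restored = uuid.UUID(bytes=packed)
```

Storing the text form more than doubles the key size on top of the bigint-vs-UUID difference already being discussed.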
benwilber0
> contention slowdowns caused by having a primary key that needs a lock (incrementing key)
This kind of problem only exists in unsophisticated databases like SQLite. Postgres reserves whole ranges of IDs at once so there is never any contention for the next ID in a serial sequence.
sgarland
I think you're thinking of the cache property of a sequence, but it defaults to 1 (not reserving ranges at all). However, Postgres only needs a lightweight lock on the sequence object, since it's separate from the table itself.
MySQL does need a special kind of table-level lock for its auto-incrementing values, but it has fairly sophisticated logic as of 8.0 as to when and how that lock is taken. IME, you’ll probably hit some other bottleneck before you experience auto-inc lock contention.
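As a rough illustration of why a cached/range-reserving sequence reduces contention, here is a hypothetical in-process allocator (class name and block size invented for the sketch) that takes the shared lock once per block of IDs instead of once per ID:

```python
import threading

class BlockIdAllocator:
    """Hands out IDs from locally reserved blocks, so the shared
    counter lock is taken once per block rather than once per ID."""

    def __init__(self, block_size: int = 100):
        self._lock = threading.Lock()
        self._next_block_start = 1
        self._block_size = block_size
        self._local = threading.local()  # each thread gets its own block

    def next_id(self) -> int:
        span = getattr(self._local, "span", None)
        if span is None or span[0] >= span[1]:
            with self._lock:              # contended once per block
                start = self._next_block_start
                self._next_block_start += self._block_size
            span = [start, start + self._block_size]
            self._local.span = span
        nid = span[0]
        span[0] += 1                      # uncontended within the block
        return nid

alloc = BlockIdAllocator(block_size=10)
ids = [alloc.next_id() for _ in range(15)]
```

Within one thread the IDs come out gapless and sequential; across threads, each grabs its own block, which is exactly the trade (gaps, but less lock traffic) that a sequence cache makes.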
gruez
>Use bigint, never UUID. UUIDs are massive (2x a bigint) and now your DBMS has to copy that enormous value to every side of a relation.
"enormous value" = 128 bits (compared to 64 bits)
In the worst case this causes your m2m table to double, but I doubt this has a significant impact on the overall size of the DB.
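Back-of-the-envelope on that worst case, assuming 8-byte bigints vs 16-byte UUIDs and ignoring per-row overhead and index internals:

```python
BIGINT_BYTES = 8
UUID_BYTES = 16
rows = 10_000_000  # hypothetical m2m table size

# Each m2m row holds two foreign keys, one per side of the relation.
bigint_total = rows * 2 * BIGINT_BYTES   # 160 MB of key data
uuid_total = rows * 2 * UUID_BYTES       # 320 MB of key data
```

The key data exactly doubles, but whether that matters depends on how large the rest of each row (and its indexes) is.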
tpm
I was in the never-UUID camp, but have been converted. Of course it depends on how much you depend on your PKs for speed, but UUIDs have a great benefit: you can create a unique key without a visit to the DB, and that can enormously simplify your app logic.
sgarland
I’ve never understood this argument. In every RDBMS I’m aware of, you can either get the full row you just inserted sent back (RETURNING clause in Postgres, MariaDB, and new-ish versions of SQLite), and even in MySQL, you can access the last auto-incrementing id generated from the cursor used to run the query.
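For example, SQLite 3.35+ (bundled with recent CPython builds) supports the same RETURNING clause as Postgres and MariaDB, so the generated key comes back from the INSERT itself with no second round trip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# The INSERT itself hands back the generated primary key.
row = conn.execute(
    "INSERT INTO users (name) VALUES (?) RETURNING id, name",
    ("ada",),
).fetchone()
```

The counter-argument in the sibling comments is not about the round trip, though: it's about needing an ID before any row can legally be inserted.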
tpm
Now imagine that storing the complete model is the last thing you do in a business transaction. So the workflow is something like: the user enters some data, then over the course of the next minutes adds more data, the system contacts various remote services that can also take a long time to respond, and the user can even park the whole transaction for the day and restore it later. But you still want a unique ID identifying this dataset for logging etc. There is nothing you can insert at the start (it won't satisfy the constraints and is also completely useless). So you can either create a synthetic ID at the start, which won't be the real ID when you finally store the dataset, or you can generate a UUID anywhere, anytime, and it will be the real ID of the dataset forever.
sgt
I also do that for convenience. It helps a lot in many cases. In other cases I might have tables that may grow into the millions of rows (or hundreds of millions), then I'd absolutely not use UUID PK's for those particular tables. And I'd also shard them across schemas or multiple DBs.
outside1234
If you don't have a natural primary key (the usual use case for UUIDs in distributed systems such that you can have a unique value) how do you handle that with bigints? Do you just use a random value and hope for no collisions?
benwilber0
You use a regular bigint/bigserial for internal table relations and a UUID as an application-level identifier and natural key.
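The split described here can be sketched in plain Python (a stand-in for a Django model; `itertools.count` plays the role of the database's auto-incrementing bigserial):

```python
import itertools
import uuid
from dataclasses import dataclass, field

_pk_sequence = itertools.count(1)  # stand-in for the DB's bigserial

@dataclass
class User:
    # Internal surrogate key: compact, used for joins and FK relations.
    pk: int = field(default_factory=lambda: next(_pk_sequence))
    # External identifier: safe to expose in URLs and APIs, and it
    # reveals nothing about the table's row count.
    public_id: uuid.UUID = field(default_factory=uuid.uuid4)

alice = User()
bob = User()
```

Joins stay on the cheap 8-byte key; only the API surface pays the 16-byte cost.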
hellojesus
Wouldn't you just have an autoincrementing bigint as a surrogate key in your dimension table?
Or you could preload a table of autoincremented bigints and then atomically grab the next value from there where you need a surrogate key like in a distributed system with no natural pk.
outside1234
Yes, if you have one database. For a distributed system though with many databases sharing data, I don't see a way around a UUID unless collisions (the random approach) are not costly.
rowanseymour
And assuming we're not talking v7 UUIDs, your indexes are gonna have objects you might commonly fetch together randomly spread everywhere.
LunaSea
But if you use sequential integers as primary key, you are leaking the cardinality of your table to your users / competitors / public, which can be problematic.
varispeed
Is it really enormous? bigint vs UUID is similar to talking about self-hosting vs cloud to stakeholders. Which one has bigger risk of collision? Is the size difference material to the operations? Then go with the less risky one.
rowanseymour
You shouldn't be using BIGINT for random identifiers so collision isn't a concern - this is just to future proof against hitting the 2^31 limit on a regular INT primary key.
hu3
I made a ton of money because of this mistake.
Twice now I was called in to fix UUIDs making systems crawl to a stop.
People underestimate how important efficient indexes are in relational databases, because replacing auto-increment INTs with UUIDs works well enough for small databases, until it doesn't.
My gripe against UUIDs is not even performance. It's debugging.
Much easier to memorize and type user_id = 234111 than user_id = '019686ea-a139-76a5-9074-28de2c8d486d'
seanwilson
It's easy to get something quick working with HTMX and Django, but if you want robust UI tests that actually test what happens when users click stuff, don't you need to use something like Playwright? This can be pretty heavy, slow and flaky, compared to regular Django tests?
I find with HTMX, it can introduce a lot of edge cases to do with error handling, showing loading progress, and making sure the data on the current page is consistent when you're partially updating chunks of it. With the traditional clunky full-page-refresh Django way, you avoid a lot of this.
flakiness
It looks like htmx is popular in the Django community. Is there any background story behind this? (Context: just picked Django for a hobby project. Don't know much about webdev trends beyond what's talked about on the HN top page.)
wahnfrieden
Server side template rendering is popular already and well supported in Django ecosystem
neural_embed
Some of the talks look really interesting — are there any YouTube videos linked? I couldn’t find those.
null
Pretty cool seeing how people still go for Django even with so many new frameworks, always makes me wanna go back to it when stuff gets messy tbh