You probably don't need query builders
155 comments
January 21, 2025 · hyperpape
nycdotnet
Agree. With patterns like this you are leaning on your db server’s CPU - among your most scarce resources - versus doing this work on the client on a relatively cheap app server. At query time your app server knows if $2 or $3 or $4 is null and can elide those query args. Feels bad to use a fast language like Rust on your app servers and then your perf still sucks because your single DB server is asked to contemplate all possibilities on queries like this instead of doing such simple work on your plentiful and cheap app servers.
BeefWellington
I've seen many devs extrapolate this thinking too far into sending only the most simple queries and doing all of the record filtering on the application end. This isn't what I think you're saying -- just piggybacking to try and explain further.
The key thing here is to understand that you want the minimal correct query for what you need, not to avoid "making the database work".
The given example is silly because there are additional parameters that must be either NULL or have a value before the query is sent to the DB. You shouldn't send queries like:
SELECT * FROM users
WHERE id = 1234
AND (NULL IS NULL OR username = NULL)
AND (NULL IS NULL OR age > NULL)
AND (NULL IS NULL OR age < NULL)
But you should absolutely send:
SELECT * FROM users
WHERE id = 1234
AND age > 18
AND age < 35
deredede
While sub-optimal, your first example is probably fine to send and I'd expect to be simplified early during query planning at a negligible cost to the database server.
What you shouldn't send is queries like:
SELECT * FROM users
WHERE ($1 IS NULL OR id = $1)
AND ($2 IS NULL OR username = $2)
AND ($3 IS NULL OR age > $3)
AND ($4 IS NULL OR age < $4)
because now the database (probably) doesn't know the value of the parameters during planning and needs to consider all possibilities.
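The better-behaved alternative is to build only the predicates you actually have values for. A rough sketch (hypothetical filter values) of what the client would send instead:
-- when only the age bounds are supplied:
SELECT * FROM users
WHERE age > $1
  AND age < $2;
-- and a different statement when only the username filter is present:
SELECT * FROM users
WHERE username = $1;
Each statement exposes exactly the predicates that matter, so the planner can pick an index per query shape.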
tsarchitect
One of my rules: don't send NULL over the wire, but of course always check for NULL on the server if you're using db functions.
kamma4434
While this is no excuse for sending sloppy queries to the database server, my rule of thumb with databases - as I was told by my elders - is "if it can be reasonably done in the database, it should be done by the database". Database engines are meant to be quite performant at what they do, possibly more so than your own code.
cogman10
Databases aren't magic.
They have limited CPU and IO resources. They can only optimize within the bounds of the current table/index structure. And sometimes they make a bad optimization decision and need to be pushed to do the right thing.
Databases can, for example, sort things. However, if that thing being sorted isn't covered by an index then you are better off doing it in the application where you have a CPU that can do the n log n sort.
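As a rough Postgres sketch (hypothetical table and column names) of checking which side should do the sort:
-- without an index on last_login this typically plans a full scan plus an
-- explicit sort on the server:
EXPLAIN SELECT * FROM users ORDER BY last_login DESC LIMIT 50;
-- with a matching index the server can instead stream rows already in order:
CREATE INDEX users_last_login_idx ON users (last_login DESC);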
Short quips lead to lazy thinking. Learn what your database can and can't do fast and work with it. If something will be just as fast in the application as it would be in the database you should do it in the application.
I've seen the end result of the "do everything in the database" thinking and it has created some of the worst performance bottlenecks in my company. You can do almost everything in the database. That doesn't mean you should.
pocketarc
This is actually an incredible way of articulating something that's been on my mind for quite a while. Thank you for this, I will use this.
The received wisdom is, of course, to lean on the DB as much as possible, put all the business logic in SQL because of course the DB is much more efficient. I myself have always been a big proponent of it.
But, as you rightly point out, you're using up one of your infrastructure's most scarce and hard-to-scale resources - the DB's CPU.
dagss
I think there are two different concerns here though:
The article recommends something that may lead to using the wrong query plans. In the "right" conditions, you will do full table scans of all your data for every query.
This is making the DB waste a lot of CPU (and IO).
Wasting resources like that is a different issue from deciding where to do work that has to be done anyway!
I am a proponent of shifting logic toward the DB, because the work likely ends up there anyway, and having as much logic as possible in the DB usually reduces resource consumption for the DB as well.
The extreme example: you want sum(numbers). It is so much faster to compute the sum in one round trip to the DB than to do a thousand round trips to fetch the numbers and sum them on the client. The latter also costs the DB server far more resources.
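In concrete terms (hypothetical table):
-- one round trip; the server returns a single aggregated row
SELECT sum(amount) FROM payments WHERE user_id = $1;
-- versus shipping every row to the client just to add them up
SELECT amount FROM payments WHERE user_id = $1;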
My point is: Usually it is impossible to meaningfully shift CPU work to the client of the DB, because the client needs the data, so it will ask for it, and looking up the data is the most costly operation in the DB.
scarface_74
> The received wisdom is, of course, to lean on the DB as much as possible, put all the business logic in SQL because of course the DB is much more efficient. I myself have always been a big proponent of it.
This is not received wisdom at all and the one edict I have when leading a project is no stored procedures for any OLTP functionality.
Stored Procs make everything about your standard DevOps and SDLC process harder - branching, blue green deployments and rolling back deployments.
dinosaurdynasty
There's a different reason to lean on the DB: it's the final arbiter of your data.
Much harder to create bad/poisoned data if the DB has a constraint on it (primary, foreign, check, etc) than if you have to remember it in your application (and unless you know what serializable transactions are, you are likely doing it wrong).
Also you can't do indexes outside of the DB (well, you can try).
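A hedged sketch (hypothetical schema) of the kind of arbitration meant here:
-- the database rejects invalid rows no matter which application, script,
-- or ad-hoc session tries to write them
ALTER TABLE users ADD CONSTRAINT users_age_nonnegative CHECK (age >= 0);
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE (email);
ALTER TABLE orders ADD CONSTRAINT orders_user_fk
  FOREIGN KEY (user_id) REFERENCES users (id);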
ramchip
I think the conventional wisdom is to lean on the DB to maintain data integrity, using constraints, foreign keys, etc.; not to use it to run actual business logic.
nycdotnet
This is Brent Ozar's old theme.
jasode
> WHERE id = $1
>It's worth noting that this approach has significant dangers for execution performance
This extra WHERE id=$1 clause makes it behave differently from the slow examples you cited from the Markus Winand blog. The query planner should notice that the id column is a selective, high-cardinality index (it may even be the unique primary key). The query optimizer can filter on id to return a single row before evaluating the dynamic NULL checks of $2, $3, $4, avoiding a full-table scan.
The crucial difference is that the id clause doesn't have an extra "OR id IS NULL" or "$1 IS NULL" clause, as the blog examples do.
cwillu
Note that null is being used here in the “unknown value” sense in order to combine several possible queries into one query plan. Which is a bad idea for a query you need to be performant: $1 can be null (which is possible in the original stackoverflow question that this blog post is a reaction to), and if the parameters are passed in via a bound argument in a prepared statement (not uncommon), then the query plan won't necessarily be taking the passed parameters into account when deciding on the plan.
hyperpape
You are right. If that’s the query you need to write, you’ll be ok.
That said, I don’t think I’ve ever had occasion to write a query quite like that. I’ve written
select * from blah where id in (1,2,3…) and condition
or select * from blah where condition1 and condition2
but never a query quite like this. Do you know of use cases for it?
Given that most queries don't look like that, I think my criticism is reasonable. For most use cases, this query will have performance downsides, even if it doesn't for some very narrow use-cases.
nycdotnet
Record selector to drive a data grid. Ex: Filter employees by location, or active/terminated, or salary/hourly, etc. and let the user choose one or many of these filters.
hombre_fatal
The use case is basically every /search endpoint or any table view where the user can select filters.
throwup238
That sounds like the developer use case. Data scientists doing ETL and analyzing messy data with weird rules like the ones above are common (although the id is usually a contains/in to handle lists of rows that don’t fit any of the conditions but must be included).
I’ve had to do some weird things to clean up data from vendor databases in several industries.
rw-access
I have hundreds of queries like this in production (Go + Postgres + pgx), and don't have issues leveraging the right indexes. Make sure when using prepared statements, you have custom query plans per query for this via `SET plan_cache_mode = force_custom_plan`.
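For reference, that setting is a per-session GUC in Postgres 12 and later:
-- always plan with the actual parameter values instead of falling back to
-- a cached generic plan after a few executions
SET plan_cache_mode = force_custom_plan;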
These optimizations are trivial for Postgres to make at optimization/plan time, so there's no runtime hit. But as always, profile and double check. You're definitely right that assuming can get you in trouble.
I don't have experience with the other databases to speak to their quirks. But for my specific setup, I haven't had issues.
BeefWellington
If your column named simply `id` isn't a unique index you've gone very wrong.
The rest of the query plan probably won't need much power.
hobs
It really depends on your query engine. This would be considered a "catch-all query" in SQL Server, and you're going to have really bad parameter sniffing blowing you up; you usually do not want to do this.
BeefWellington
I would expect the query plan for SQL server to essentially return records matching `id` first (which again should be a situation where uniqueness comes into play) and then performing the rest of the execution on the subset that matches, which is hopefully one.
I leave allowances for `id` to be a stand-in for some other identity column that may represent a foreign key to another table. In which case I'd still expect SQL server's query planner to execute as: initial set is those where said column matches the supplied number, then further applies the logic to that subset. In fact I'd love to see where that isn't the case against a transactional DB.
crazygringo
AFAIK, that's only a risk if you're using prepared statements.
If you're just running each query individually, the planner should be smart enough to ignore the Boolean clauses that can be skipped and use the appropriate indexes each time.
But yes, if you're trying to optimize performance with prepared statements, you almost certainly do not want to follow this approach if any of columns are indexed (which of course you will often want them to be).
hinkley
This may be a situation where you classify the parameters and create a handful of queries that include or exclude each class.
The same way we always have a distinct query for selecting by ID, you have one with just username, one with demographics, one with account age or activity ranges, and then C(3,2) combinations of categories.
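A hedged sketch of that in plain SQL (hypothetical filters), one statement per filter class so each gets its own plan:
-- by username only
SELECT * FROM users WHERE username = $1;
-- by age range only
SELECT * FROM users WHERE age BETWEEN $1 AND $2;
-- one of the combined variants: username + age range
SELECT * FROM users WHERE username = $1 AND age BETWEEN $2 AND $3;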
dagss
At least for MSSQL: never do this (before learning about plan caches). Or at least, if you do, add OPTION (RECOMPILE) to the query.
For each combination of parameters to search for you may want to use a different index.
But... the query plans are cached by query string lookup!
So it is imperative that your search string looks different for each query plan/index being used.
The code suggested here will pick a more or less random index (the one optimized for the parameters of the first execution) and stick with it for the remaining executions, leading to bad queries for combinations of non-null parameters that don't match the first query.
You could just add a comment inside the string that was different depending on what parameters are null, but that is no less complex than just generating the query.
PS: Of course there are situations where it fits, like if your strategy is to always use the same index to do the main scan and then filter away results from it based on postprocessing filters. Just make sure to understand this issue.
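A hedged T-SQL sketch of the OPTION (RECOMPILE) escape hatch mentioned above; it trades plan-cache reuse for a plan that matches the actual parameter values:
-- SQL Server: recompile per execution so NULL parameters are folded away at
-- plan time instead of being served by a cached catch-all plan
SELECT *
FROM users
WHERE (@username IS NULL OR username = @username)
  AND (@min_age IS NULL OR age > @min_age)
OPTION (RECOMPILE);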
gulikoza
This ^
I debugged an app a couple of years ago that from time to time brought the entire MSSQL server down. The server had to be physically restarted. Nobody could figure out for years what was going on; all the queries had been analyzed and checked for missing indexes, everything was profiled... except that the app generated a query like this, which did not play well with the cached plan.
orf
All of these are simple, almost unrealistic queries. Show me how to handle optional joins in the filter.
> My naive-self in the past used to create a fancy custom deserializer function that transformed 11,22,33,44 from a String into a Vec<i64> and that is useless work that could have easily been handled by the database.
Great, now the database has no idea what the cardinality of the IN clause is and has to generate a sub-optimal plan, because it could be 1 or it could be 10000.
The same for a lot of the other examples.
hinkley
This article just gives me the impression that the Rust query builder has terrible DevEx.
Why is adding the clause and binding variables two calls and not one? The lack of variadic functions makes this clunky but you could limit people to 3 binds per clause and that would cover 95% of people. Formatting and cognitive load both crap out around 3 anyway.
catlifeonmars
While clunky to implement, variadic calls are mostly a solved problem in Rust by way of tuples and macro rules. It's ugly to implement as a library author (generate K different declarations using macro expansions, where K is the limit of the variadic expansion), but IMO that's secondary to the library user experience, which is fine with this approach.
MSM
Depends what you mean by optional join - if you mean you want the flexibility to build up a list of columns you actually want data from on the fly, you would probably have good results from doing left joins to the tables that contain those columns while ensuring you have a unique column (PK, unique constraint) you're joining to. Your query engine should be smart enough to realize that it can avoid joining that table.
The logic is that you've assured the query engine it'll never return more than one row from the optional table, so if you're not returning any actual columns from that table there's no need to think about it. Without the unique constraints the query engine has no idea how many rows may be returned (even if you aren't selecting the data), so it still needs to do the work.
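A rough sketch (hypothetical tables) of the join a planner can drop:
-- profiles.user_id has a UNIQUE constraint, so this LEFT JOIN cannot change
-- the row count; with no profiles columns referenced, a decent planner
-- removes the join entirely
SELECT u.id, u.username
FROM users u
LEFT JOIN profiles p ON p.user_id = u.id;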
orf
I mean a series of joins that are conditional on a user filter. For example, users with orders with invoices that are paid in the last 10 days.
Anyone can string-concat simple one table queries. That’s fine. The moment you add any non-trivial joins it gets exponentially more complex.
tucnak
Lateral. Show me correlated subqueries, and I'll take it seriously.
The blog reads like an exercise in writing Rust macros.
bvrmn
It seems the article demonstrates the opposite argument. SQL builders are useful precisely so you don't have to write fragile raw SQL riddled with noisy filter patterns and repeated numbered placeholders, which break easily on refactoring. Also, with raw SQL it's impossible to compose queries from abstracted parts.
Shameless plug: https://github.com/baverman/sqlbind
murkt
That is a really neat library! Can see myself using it quite a lot.
bvrmn
Yep, it's quite handy for complex reporting. The original motivation was to disentangle 1.5k lines (SQL plus the corresponding Python code to assemble the queries) of almost identical ClickHouse queries. There were two big, almost identical data tables, each with its own dictionary relations, and a bunch of reporting on top with complex grouping and filtering. Around 20 report instances, each a separate monstrous SQL query.
Schema changes were quite frequent, and the reports started to rot to the point of showing wrong data.
After refactoring it became 200 lines, and the increased generality made it possible to build even more reports.
murkt
Haven't seen it mentioned in the readme, but I found some niceties for INSERT/VALUES in the sources too!
scott_w
I don’t get the point of this article. Just reading the samples, I strongly dislike this query builder because it looks flaky and difficult to parse by eye. And the examples get worse and worse.
This isn’t an argument against query builders, that just seems like an argument to make your query builder easier to use and understand. I wouldn’t argue against programming languages by picking bad C++ libraries.
aswerty
I see a lot of pushback against this approach. And since it is something I've been experimenting with recently, this is pretty interesting stuff. Clearly it has issues with query planning getting messed up, which is not something I had been aware of, since the DB I've been experimenting with is still only in the tens of thousands of rows. But...
Using raw SQL files gets you:
1. Very difficult for devs to expose SQL injection vulnerabilities because you need to use parameters.
2. Having all available filtering dimensions on a query makes it very clear what the type of filtering is for that particular query.
3. Easy debugging where you can just throw your query into an SQL client and play around with the parameters.
4. Very clear what the total query footprint of your application is (e.g. files all neatly listed in a dir).
5. Super readable and editable.
6. Code for running the SQL is pretty much: here is my query, here are my params, execute.
7. Etc?
So the amount of good you can get out of this approach is very high IMO.
So, an open question to anybody who is more familiar with DBs (and Postgres in particular) than myself: is there a reliable way to address the issue with this approach to querying that you all are flagging as problematic here? Because beyond the query planning issues, raw SQL files (with no building/templating) just seem to me like such a better approach to developing a db access layer.
mattrighetti
Thanks for summing this up! I'm also in the thousands-of-rows space at the moment, and that's probably why I've fallen into the query planning trap that many pointed out.
lelanthran
That's basically what I did. No problems even on complex queries: I use CTEs when a single statement is sufficient, or stored procedures when it isn't.
shkkmo
This is the kind of anti-pattern that can work on toy or small projects but doesn't scale well to larger projects or groups.
> 1. Very difficult for devs to expose SQL injection vulnerabilities because you need to use parameters.
You should use parameters either way.
> 2. Having all available filtering dimensions on a query makes it very clear what the type of filtering is for that particular query.
Code is easier to document well than a SQL query
> 3. Easy debugging where you can just throw your query into an SQL client and play around with the parameters.
Query builders will give you a query you can do the same thing with.
> 4. Very clear what the total query footprint of you application is (e.g. files all neatly listed in a dir).
This seems like a design/organization choice that is separate from whether those files are query or code.
> 5. Super readable and editable.
Doesn't scale as a project grows; you end up with massive unwieldy queries or a bunch of duplicated code across a bunch of files.
> 6. Code for running the SQL is pretty much: here is my query, here are my params, execute.
It is pretty much the same with a query builder, in either case the 'execute' is calling a library where all the actual stuff happens.
If you know your project is going to stay small with simple queries and your scope won't creep, raw SQL files might be the right choice, but they will create technical debt as the project grows. It's worth the time in the long run to get comfortable with a query builder.
hinkley
Eventually people will have enough of Little Bobby Tables and url spoofing and then query engines won’t allow string concatenation at all.
The only alternative I know of is to make a query engine that exactly emulates the String Interpolation syntax of the host language and can detect string concatenation in the inputs.
But the problem with non-builders is always going to be GraphQL and advanced search boxes, where there are any of a couple dozen possible parameters and you either build one query that returns * for every unused clause or you have a factorial number of possible queries. If you don’t use a builder then Bobby always shows up. He even shows up sometimes with a builder.
deergomoo
> He even shows up sometimes with a builder
Something I've run into a lot over the years is people not realising that (at least in MySQL) prepared statement placeholders can only be used for values, not identifiers like column names.
Because many query builders abstract away the creation of a prepared statement, people pass variables directly into column fields and introduce injection vulns.
Number one place I see this is data tables: you have some fancy table component where the user can control which columns to see and which to sort by. If you’re not checking these against a known good allow list, you’re gonna have a bad time.
RadiozRadioz
Yes, it varies by database whether placeholders can be used for table names.
Personally I find table names sufficiently non-dynamic that an enum of accepted values & string concatenation works. Whenever I've wanted to template user input into table names, I've realised that I should probably refactor my schema or add some views.
wredcoll
JavaScript added 'tagged templates' a while ago, which allow you to write a function that gets passed the 'parse tree' of a template string, so JS libs do a lot of 'interpolation magic' for SQL queries.
ericvsmith
And for a similar idea in Python see https://peps.python.org/pep-0750/
electronvolt
I mean, in C++ (17? 20? Whenever constexpr was introduced) it's totally possible to create a library that allows you to build a SQL query via the language's string concatenation libraries/etc., but only allows you to do it with static strings unless you use ~shenanigans. (C++ unfortunately always allows ~shenanigans...)
I guess you do wind up needing to potentially re-implement some basic things (or I guess more complex, if you want format string support too). But for basic string concatenation & interpolation, it's reasonable.
That's a pretty useful way to get basic string concatenation while also preventing it from creating opportunities for SQL injection.
For example, you have a class that requires a constexpr input & can be appended to/concatenated/etc.:
SqlStringPart(constexpr ...)
operator+(SqlStringPart ...)
(so on)
And you have a Query API that only takes SQL string expressions that are built out of compile time constants + parameters:
SqlQuery(SqlStringPart ..., Parameters ...);
This doesn't solve the problem mentioned in the article around pagination & memory usage, but at least it avoids letting someone run arbitrary SQL on your database.
kevingadd
Your post made me realize that custom string interpolation handlers from C# could probably be used to do injection-safe SQL querying (https://learn.microsoft.com/en-us/dotnet/csharp/advanced-top...). Makes me wonder whether scenarios like that were considered when they built that feature or if it's just a fun coincidence from making string interpolation extensible.
neonsunset
This is already a thing: https://learn.microsoft.com/en-us/ef/core/querying/sql-queri... although it uses an older API - https://learn.microsoft.com/en-us/dotnet/api/system.formatta...
Interpolated string handlers are a newer API made primarily with customization and efficiency of interpolation in mind. They are more complicated to implement and use but enable zero-cost and/or zero-allocation use cases and performing custom behavior on appending literals or data. They are also used to implement the default string interpolation handler and a few auxiliary ones like Append handler for a string builder which writes directly into it, bypassing intermediate string construction.
It's quite a mouthful to say, but very useful for advanced scenarios. I haven't seen any other language provide anything comparable.
See also: https://devblogs.microsoft.com/dotnet/string-interpolation-i...
lmm
CASE
WHEN $2 BETWEEN 0 AND 100 AND $1 > 0
THEN (($1 - 1) * $2)
ELSE
50
END
What a wonderful, maintainable language for expressing logic in /s. Perfect for my COBOL on Cogs application.
The problem with SQL has never been that it's impossible to put logic in it. The problem is that it's a classic Turing Tarpit.
liontwist
That’s not what a Turing tarpit is. It’s the opposite - a language tailored to a specific useful task.
lmm
"Everything is possible but nothing of interest is easy" describes SQL perfectly in my experience.
lelanthran
Why the `/s`?
That's neither more nor less comprehensible than what I often see in python's built-in DSL within list comprehensions.
At least the SQL variant has the excuse of being designed back when language design was still in its infancy. The madness in Python's list comprehensions and the ad hoc DSL in Ruby have no such excuse.
nixpulvis
`push_bind` covers a good deal of the concerns for a query builder, while letting us think in SQL instead of translating.
That said, an ORM like ActiveRecord also handles joins across related tables, and helps avoid N+1 queries, while still writing consistent access to fields.
I find myself missing ActiveRecord frequently. I know SeaORM aims to address this space, but I don't think it's there yet.
catlifeonmars
These (avoid N+1, join across related tables) seem like problems that could be solved by writing the SQL by hand. Is it that much of a lift to treat the database like an API and just write a thin library/access layer around it?
ORMs seem like they are a good fit for dynamic queries, where the end user, not the programmer, is developing the models.
Maybe I’m missing the point?
danielheath
For established software where performance matters, hand-writing the SQL is reasonable.
Hand-writing SQL for, say, a faceted filtering UI is a tedious job that takes most of a day in even fairly simple cases, or about 20 minutes with a decent ORM.
ActiveRecord (and related libraries like ActiveAdmin) are _amazing_ for rapid prototyping - eg if you don't even know whether you're going to end up keeping the faceted search.
vips7L
> For established software where performance matters, hand-writing the SQL is reasonable.
These things aren't mutually exclusive though. Every ORM I know gives you an escape hatch to write whatever SQL you want. ORMs are great for 90% of things, and as a reviewer I don't need to scrutinize their queries too much. It's much easier for me to review an ORM builder query because I know it's going to do the correct joins on the correct columns. For example, in the ORM I use I'd rather see:
query()
.where()
.eq("parent", parent);
Instead of: "select * from table join parent on parent.id = table.parent_id where parent.id = :parent"
dgan
Well, query builders are composable. You can create a builder with a partial query and reuse it in many queries. With SQL strings, you either have to copy-paste the string or define SQL functions. It's a trade-off!
camgunz
My last job had a strong "no query builders or ORMs" policy, but of course then we wanted composability, so we had in-house, half-baked implementations of both that were full of injection bugs and generated incorrect queries with miserable performance.
That's not to say there's never a place for "keep your queries as SQL files and parameterize them", just that I think your point is 100% valid: if you're unaware you're making tradeoffs, you'll at some point experience some downsides of your chosen system, and to alleviate those you might start building the system that would fit your use case better, totally unaware of the fact that you eschewed an existing, off the shelf system that would do what you want.
nixpulvis
Good point, even though copying strings isn't hard. Figuring out where in the string to inject new parts isn't always as easy. You end up with `select_part`, `where_part`, etc.
hinkley
Making identical updates to copies of the strings when a bug is discovered is hard though. People who act like it isn’t hard create most of the evidence that it is.
agumonkey
Tried to explain ORM composability at work (without praising ORMs like a fanatic); most didn't care, and someone said to pass context dicts for future openness... weird.
nixpulvis
Having had the same argument at work in the past, I feel your pain. Trying to migrate away from massive SQL files and random strings here and there to a collection of interdependent composable SQL builders is apparently a tough sell.
aswerty
From experience, this goes from many little piles of hell to sprawling piles of hell. Obviously the current situation isn't good, but the "collection of interdependent composable SQL builders" will turn into an insane system if you roll things forward 5 years. Everybody will yolo whatever they want into the collection, things in the collection will almost match a use case but not quite, and you'll end up with components that are 90% similar.
Obviously that is just one person's experience. But I'd take a single crazy SQL file any day of the year, because its insanity is scoped to that file (hopefully).
But I'd agree the random string are no good. Maybe refactoring them into an enum either in the code or in the DB would be a good step forward.
wredcoll
My experience is this sort of thing is cyclical. You start with one approach, you use it a lot, you start noticing the flaws, oh here's a brand new approach that solves all these flaws... and the cycle repeats.
jpalomaki
Also, once you start pasting the SQL together from multiple pieces, the risk of SQL injection rises.
hinkley
I don't know if it's still true, but some databases used to be able to process prepared statements more efficiently. We ran into a bottleneck with Oracle 9i where it could only execute queries currently in the query cache. Someone fucked up our query builder so a bunch of concurrent requests weren't using the same prepared statement when they should have been.
Which I specifically told him not to do.
jprosevear
And ultimately every ORM allows raw SQL if you need to fall back.
liontwist
You need a fallback for anything involving joins or column renaming.
SQL queries do not return graphs of objects, they return arrays of rows.
from-nibly
SQL isn't composable. It would be great if it was, but it isn't. So we can use query builders or write our own, but we're going to have to compose queries at some point.
rented_mule
Common Table Expressions enable a lot of composability. Using them can look like you're asking the DB to repeat a lot of work, but decent query optimizers eliminate much of that.
https://www.craigkerstiens.com/2013/11/18/best-postgres-feat...
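A small sketch (hypothetical tables) of the kind of composition CTEs buy you:
WITH active_users AS (
    SELECT id, username FROM users WHERE active
),
recent_orders AS (
    SELECT user_id, count(*) AS order_count
    FROM orders
    WHERE created_at > now() - interval '30 days'
    GROUP BY user_id
)
SELECT u.username, coalesce(o.order_count, 0) AS order_count
FROM active_users u
LEFT JOIN recent_orders o ON o.user_id = u.id;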
somat
How composable do you want it?
When I want to make a complicated intermediate query that other queries can reference I create it as a view.
I will admit that future me hates this sometimes when I need to dismantle several layers of views to change something.
And some people hate to have logic in the database. Personally, I tolerate a little logic, but I understand those who don't like it. Anyway, common table expressions (WITH subquery AS (...)) are almost as composable as views, and have the benefit of being self-contained in a single query.
liontwist
views are query composition
danielheath
Common table expressions do exist, and they compose ~alright (with the caveats that you're limited to unique names and they're kinda clunky and most ORMs don't play nice with them).
maximilianroos
SQL is terrible at allowing this sort of transformation.
One benefit of PRQL [disclaimer: maintainer] is that it's simple to add additional logic — just add a line filtering the result:
from users
derive [full_name = name || ' ' || surname]
filter id == 42 # conditionally added only if needed
filter username == param # again, only if the param is present
take 50
anonzzzies
I never looked into PRQL; does the ordering matter? Because if not, that would be great; i.e., is this the same:
from users
take 50
filter id == 42 # conditionally added only if needed
filter username == param # again, only if the param is present
derive [full_name = name || ' ' || surname]
? As that's more how I tend to think and write code; in SQL I always jump around in the query, since I don't work in the order SQL works.
I usually use knex or EF or such, where ordering doesn't matter; it's a joy. However, I prefer writing queries directly as it's easier.
bvrmn
`take 50` of all `users` records in random order, and then filter the result by username and id? I hope that's the right answer and the PRQL authors are sane.
withinboredom
I do sometimes miss RQL with RethinkDb. It was a cool database.
The recommended approach is to generate SQL that looks like:
It's worth noting that this approach has significant dangers for execution performance: it creates a real chance that you'll get a query plan that doesn't match your actual query. See https://use-the-index-luke.com/sql/where-clause/obfuscation/... for some related material.