Things I would have told myself before building an autorouter
118 comments · March 28, 2025
ChrisGammell
seveibar
We will absolutely be supporting KiCad! Placement is something we have big plans for but I think it’s important to have a really fast, cache-friendly autorouter as a foundation (if you are cache friendly, moving components and trying different layouts is much much faster)
Javascript is fairly portable now with many runtimes (some even quite small, such as QuickJS or Porffor), so I anticipate people will be able to run locally and build their own ginormous cache :)
I think everyone should be concerned about lockin and fragmenting ecosystems in EDA, but tscircuit and our autorouter are MIT-permissive technology (very rare in EDA) so we play nice (interoperable) with everyone.
micw
> We will absolutely be supporting KiCad
I wonder why there's no standard API for autorouting, so that no particular EDA has to be individually supported. I'd love to see the autorouter as a standardized HTTP endpoint so that I can run one locally or even buy Autorouter-as-a-Service. E.g. I'd happily pay for TopoR¹ as a service for some projects while others are fine with classic routing.
seveibar
Software innovation in EDA seems pretty slow and is more IP-sensitive, so there hasn't been a huge proliferation of cloud services. We're trying to bring up some standards that make EDA more web-friendly, like the Circuit JSON[1] format and the Simple Route JSON format[2], but it'll still be years before the tools support web-first standards (kind of reminds me of png vs webp)
buescher
The de facto standard external autorouter workflow is through specctra files.
_fizz_buzz_
How are you planning to support kicad? Would it be using kicad's new IPC API? In principle it should be possible to create javascript bindings for the new protobuf interface.
seveibar
We are looking hard at the IPC interface or doing a plugin, but we will also support “uploading” kicad_sch/kicad_pcb to a browser (local-first, so not actually uploading)
cushychicken
Hey - take a look at my other reply to u/ChrisGammell - I have some input on your project you might find useful that I don’t feel like retyping lol
buescher
Years ago the long-gone and unlamented OrCAD Layout had a spreadsheet view of your nets that was a slightly better than good-enough interface for setting constraints for autorouting. Once you got your footprints, placement, constraints, and hand-routed nets locked down, you could iterate very quickly.
It's nice to see anyone at all working on PCB autorouters, which have been pretty stagnant since Cadence bought SPECCTRA in the nineties. The guys who wrote SPECCTRA went to the VLSI world iirc and never came back - I guess that's where all the fame and fortune is. Probably was a patent minefield for a while there too; might still be. Autoplacement was an utterly intractable problem back then, still seems to be, and it's probably ripe for a generative AI approach - a good generative AI first pass at parts placement could be a net time saver. The biggest problem would be convincing the cranks that you can be good even if you can't be perfect.
I'm sort of puzzled by the kids out there trying to do schematics-as-code. On the one hand, I would love to see that work as a backend format - especially where folks like the jitx guys seem to be making a lot of progress in encoding appnote and datasheet-level design rules into parts models. Reading every datasheet in the detail needed for commercial designs is more work than people expect, and so is getting junior engineers on board with the process. So any automation there is beneficial. The problem is that the approaches all seem rooted in this idea of schematics as data entry for layout, a kind of source code, rather than as design documentation with a carefully evolved visual language that needs to be accessible to people who won't have an EDA suite installed on their computer. I guess people who learned from puzzling out schematics in the Adafruit/Sparkfun/Shenzhen style with explicit wiring minimized might not understand the value of a good schematic.
The other thing is leaning too far into thinking-by-analogy and trying to make PCB-level design like VLSI design. I don't think it's entirely impossible - if we had better tools for DRC and validation, component-level design would look more like VLSI design. But the design space is so different, with such loose coupling between design, EDA/CAM/simulation, validation, fabricators, assemblers, component vendors, regulators/certifiers, and so on that just getting one corner of it really right would be a major accomplishment.
Joel_Mckay
For any benefit the auto-router adds, it usually ends up costing a project later...
In general, impedance controlled UHF design work with domain specific simulation tools is the modern workflow. Thus, one manually does the critical tracks first, island-pours, and finally power interconnects.
KiCad is slightly better than nothing for layout, and trying to make it into yet another half-baked simulation tool seemed silly. =3
cushychicken
I wish the Kicad team had sunk all the time they put into simulation support (which I rarely use, in or out of kicad) into a proper constraint system instead.
If I need to sim, I’ll just use LTspice or TI TINA - I.e. the simulation ecosystem that includes the part model I need, with no wrangling to get that spice model to play nicely in some other spice toolchain.
Joel_Mckay
Micro-cap 12 is still available, was made free, and runs in Wine64 just fine.
https://web.archive.org/web/20230214034946/http://www.spectr...
For learning, the current LTSpice models and QUCS VNA profiled device imported simulation data are reasonable tools as well.
I used KiCad all the time for personal projects, and it has recently already addressed the major issues people found annoying:
1. unstable library version dependency management
2. complex design collaboration over traditional versioning tools like git (still an upstream feature last I checked, but should be available soon.)
3. freezing the specific KiCad and library path structure versions across a project, as assuming every design has a single workstation workflow is a contradictory goal
KiCad is better than most current EDA software, but that is mostly a sad reflection on the traditional tools available. =3
_fizz_buzz_
I love the ngspice integration in KiCad; it has gotten really good since KiCad 7 or 8.
cushychicken
The tools are fractured, the existing players are lumbering behemoths, and the users are cranky zealots (you will have to pry KiCad out of my cold dead hands).
These last five years have been absolutely incredible to watch in Kicad’s development. Last two releases have added the two major things that Kicad didn’t have that professional CAD tools did:
* database support
* outjob features
Beyond that, it’s really just a matter of adoption and end user preference of how to put these features to work. (Databases tend to come with more internal corporate bureaucracy around organizing data than anything else.)
But, for the topic at hand: don’t you think Kicad is already moving sort of in the direction you talk about, as far as workflow to speed up layout is concerned?
Best example I can think of is the “trace autocomplete” feature in, I think, 7.0. Hit the hotkey (I think it’s F in pcbnew) and the trace will be laid for the track currently being placed. Combine this with “route from other end of track” (hotkey E) and it’s a blazing productivity boost, especially when you’re working across two different ballout grids.
Version 9 enabled buses/multiple tracks to be draggable, meaning this whole system could go even faster still.
Many times when starting a design my placements aren't set and that has a huge impact on routing.
Honestly, I’d trust an autorouter to complete a good chunk of my designs if I can get to a placement I’m satisfied with, and I can give the autorouter some constraints on where to route. For example: did a board last year using an NXP iMX8MP with an eMMC. The peripheral ballout on the processor matches the eMMC ballout really nicely - was just a matter of lining the chips up and drawing the lines. An autorouter could have done in seconds what it took me ~10 minutes if it had known to keep everything in the data bus on the top layer.
I think that’s a success criterion autorouter projects suffer from: they don’t consider their autorouter “done” unless their router can do _everything_ on a board. As a working EE, I don’t actually want that. I want an autorouter that can work with me to do a satisfactory job on a little chunk of the design at a time, give me some time to review it for correctness, then move on to the next chunk.
That would be killer. Especially if you could give constraints beyond layers, i.e. “keep all nets with names D0-7 on layers one and three, and match their lengths to within 5mm of one another. Use D0 as the net length reference.” If you can do that, then boom: you’ve solved DRAM length tuning. A huge range of design complexity is now within reach of the unwashed masses.
OP - if I can find some time to show you what I mean, I’d be more than happy to give you a demo of what I’m talking about.
seveibar
Hey cushychicken! Would love to see what you’re working on and thanks for explaining (seveibar at tscircuit dot com is my email if you want to set something up!)
I do think humans need to be in the loop on autorouters, but I also think autorouters should be very fast and decent. In web design we progressively add constraints (via CSS) to tweak the output of the visual design and see the result instantly, I believe this same workflow is possible for PCB design (and will favor those good at specifying constraints!)
cushychicken
Thanks for the reply. I’ll get in touch via email in the next few days.
I don’t think the goals of “human in the loop” and “fast and decent” are mutually exclusive, or really even at odds with each other.
I just think that most autorouters I’ve tried seem to have been written by CS people who thought PCB layout was an interesting problem to solve - without a particularly deep understanding of what working EEs care about in terms of how you route the board.
Case in point: plenty of autorouters seem to work on the paradigm of “all tracks routed? Great. I’m done.” Never mind that everything is two layers of spaghetti: no ground plane, no power planes, no regard for SI or EMC compliance. In other words: pure hell to test and certify.
Not trying to be a crotchety EE by saying this - I can give a good example of what I mean in real time. I also feel like I’m a bit of an exception in the EE community in that I think this is a solvable problem. I just don’t think many CS people have really bothered to consult real working EEs about their workflows and the downstream portion of test and certification.
rjsw
I once had to do bringup on an autorouted prototype PCB. The traces between the CPU and DRAM looped three times round the board.
GistNoesis
Point 8, the fast dismissal of Monte-Carlo method is a huge blunder.
The point of Monte-Carlo method is that you can tradeoff accuracy for speed. The longer you run your algorithm the more accurate you get.
But what's even more interesting is that we can often run the tradeoff in reverse: you can get a shitty-but-cheap result real fast.
Instead of exploring all paths, you explore only one path picked at random.
And that's where it shines: when you put it into the most imbricated loop of your algorithm.
For example when you want to learn a neural network which autoroute. Typically you would have the outerloop which is updating the neural network parameter, and the inner-loop which is computing a path through the graph.
When you use Monte-Carlo, this inner loop, which controls accuracy, can be reduced to 1 iteration if everything is bias-free. You will have a slower outer loop due to higher variance, but the machine learning should """theoretically""" learn.
This is what allows you to build policies which intuitively always pick the right decision. Like in chess or go, you have some monte-carlo tree search variant like alpha-go-zero or alpha-chess-zero or alpha-router-zero, where even without the search part, the giant cache (encoded by the neural network parameters), once trained, can compute your best guess path in one pass through your neural network aka constant time (with a constant which can easily trade memory for speed by augmenting the number of parameters or training for longer).
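To make the speed-for-accuracy dial concrete, here's a toy sketch (the graph, names, and functions are all illustrative, not from any real router): instead of exploring all paths, sample random ones and keep the best; each extra sample buys accuracy.

```python
import random

def sample_path(graph, start, goal, rng, max_len=50):
    """One random walk from start; return the path cost if goal is reached."""
    node, cost = start, 0
    for _ in range(max_len):
        if node == goal:
            return cost
        choices = graph.get(node)
        if not choices:
            return None  # dead end
        node, w = rng.choice(choices)
        cost += w
    return None  # wandered too long

def mc_route_cost(graph, start, goal, samples):
    """Best cost seen over `samples` random paths: accuracy traded for speed."""
    rng = random.Random(0)
    best = float("inf")
    for _ in range(samples):
        c = sample_path(graph, start, goal, rng)
        if c is not None:
            best = min(best, c)
    return best

# Toy graph of (neighbor, cost) edges; the true shortest a->d costs 2.
graph = {"a": [("b", 1), ("c", 10)],
         "b": [("d", 1), ("c", 1)],
         "c": [("d", 1)],
         "d": []}
```

With one sample you get a (possibly bad) answer almost instantly; with a few hundred the estimate converges on the true shortest path, which is the "longer you run, the more accurate" property in miniature.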
ur-whale
> Point 8, the fast dismissal of Monte-Carlo method is a huge blunder.
Yeah, I had the exact same reaction when I read the article.
MC is the "keeping it real" algorithm, slow, but almost always dead simple to implement and that can be trusted with super high certainty to double-check that you've not completely wandered off in the weeds.
bee_rider
Which is sort of funny given the intuitive description of Monte Carlo—a random wander, possibly into the weeds.
throwaway-blaze
I was this many days old when I learned the word "imbricated".
GistNoesis
Yeah you're right, I should have said nested (it's carried over from the French "imbriqué(es)")
burnished
Nothing wrong with inadvertently teaching folk new words, it was an uncommon and welcome surprise for me
ttyprintk
Yet the author mentioned simulated annealing, so was probably not even trying a neural net, since SA does not compute a gradient.
ttul
Simulated annealing has long been one of the classic metaheuristic methods applied to autorouting in circuit design. It was widely used in early research and some commercial tools because its probabilistic approach helps escape local minima in these NP‐hard problems. Today, while many modern autorouters in industry tend to favor faster or more specialized hybrid methods, simulated annealing remains a common benchmark in academic studies and research prototypes for routing and related tasks like floorplanning.
Source: I once wrote a simulated annealer for VLSI wiring. It was cool.
opwieurposiu
Simulated annealing works great for placing map labels. Lets say you have to label POI with flags, you do not want the flags to overlap and you can only place the labels NSEW of the POI.
1. Plop the labels down in random* directions.
2. Calculate the total overlap.
3. Randomly* move 10% of the labels.
4. Calculate the total overlap again, If the overlap is improved, keep your changes, If things got worse, then backtrack.
5. Sometimes you randomly keep changes that make things worse, this is the annealing part. This is how you escape local minima.
* The randomness does not have to be 100%; you can try smartly avoiding nearest neighbors 2/3 of the time and then do a random move 1/3 of the time. If you always try to be smart you will tend to get trapped in a local minimum.
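Those steps translate almost directly into code. Here's a minimal sketch (the cost function, parameters, and cooling schedule are mine for illustration, not from any real labeling tool):

```python
import math
import random

def anneal_labels(points, offsets, overlap, steps=5000, t0=1.0, cooling=0.999):
    """Simulated-annealing label placement, following the steps above.

    points  : POI coordinates
    offsets : candidate label directions (e.g. N/S/E/W cell offsets)
    overlap : cost function, assignment -> total overlap (lower is better)
    """
    rng = random.Random(0)
    # 1. Plop the labels down in random directions.
    state = [rng.randrange(len(offsets)) for _ in points]
    cost = overlap(state)  # 2. total overlap of the initial layout
    best, best_cost = state[:], cost
    t = t0
    for _ in range(steps):
        # 3. Randomly move ~10% of the labels.
        cand = state[:]
        for i in rng.sample(range(len(points)), max(1, len(points) // 10)):
            cand[i] = rng.randrange(len(offsets))
        new_cost = overlap(cand)  # 4. recalculate the total overlap
        # 4/5. Keep improvements; sometimes keep a worse state (the annealing
        # part), with probability that shrinks as the temperature cools.
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / t):
            state, cost = cand, new_cost
            if cost < best_cost:
                best, best_cost = state[:], cost
        t *= cooling
    return best, best_cost

# Toy usage: four POIs on a unit square, labels offset one cell N/S/E/W.
points = [(0, 0), (1, 0), (0, 1), (1, 1)]
offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # N, S, E, W

def overlap(state):
    cells = [(points[i][0] + offsets[s][0], points[i][1] + offsets[s][1])
             for i, s in enumerate(state)]
    return sum(1 for i in range(len(cells))
               for j in range(i + 1, len(cells)) if cells[i] == cells[j])

placement, cost = anneal_labels(points, offsets, overlap)
```

On this toy input the annealer reliably finds a zero-overlap assignment; the acceptance probability exp((cost - new_cost) / t) is what implements step 5.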
Animats
That's a great discussion of autorouting. Then he ends with: "key piece to enable the “vibe-building” of electronics." Ouch
Routing itself is easy. It's when the router has to tear up stuff it already routed to fit new stuff in that things get complicated and the combinatorics start to get you.
I miss the autorouter KiCAD used to have. It was taken out for iffy IP reasons (the author had worked for an autorouting company). Reactions to users who wanted it back were along the lines of "Real Men Don't Use Autorouters".[1]
[1] https://forum.kicad.info/t/autorouting-and-autoplacement/185...
seveibar
Haha I feel like the right reaction to "vibe-*" is to cringe. I cringe a little bit every time I see someone promoting a vibe-coded app at the moment, but I think back to how I got started in coding (I was constantly annoying people on old ActionScript forums to fix my code) and I see so much potential in people being able to get started quickly in any domain. I hope that our autorouter (and others that follow!) will similarly allow people to ship their first electronics without needing tons of guidance or formal education.
That said a good autorouter should also be useful to professionals! So hopefully we help with that as well!
bsder
I wish these folks well and hope that their autorouter gets folded into KiCad.
However, I'm one of the cranky old people who don't really want to see KiCad expend any energy on autorouters: PCB autorouters are a pain in the ass that never work.
We can look at VLSI autorouters to determine why that is. VLSI autorouters also were a pain in the ass that never worked. But what happened was that VLSI suddenly got lots of layers and you could dedicate a layer to routing vertically, a layer to routing horizontally, a layer to routing power and still have a couple of layers for global vertical interconnect, global horizontal interconnect, and global power.
The fundamental problem with PCB autorouting is that PCBs have WAY more obstructions than VLSI chips do. First, components themselves are obstructions and choke points. Second, PCB vias almost always obstruct all the layers of the board while VLSI vias only obstruct the two layers being connected. Third, PCB vias tend to be bigger than your interconnect metal width. Fourth, the number of PCB layers in use is way smaller than the number of layers in VLSI--the most common numbers of layers are 4 layers (most projects--of which only 2 are really used for general routing), 2 layers (because of cost engineering--good luck autorouting these) and 6 (a tiny minority).
It all adds up to PCB autorouting being a significantly more complicated job than VLSI autorouting.
grues-dinner
> 6 (a tiny minority)
I don't think that's true. Perhaps by number of PCBs made 2 and 4 layers dominate: all those IoT doohickeys and alarm clock displays. And even single layer phenolic boards. And for most hobbyist work with little MCUs, 4 layers is a sweet spot. But they're usually either very simple devices where routing is not the problem, or they have very tight application constraints where the placement has to be squeezed and squeezed.
But much of the effort in designing PCBs in industry is on 6+ layers. You can probably smash out a simple smart lightswitch PCB in a day. Boards with BGA SoCs, DDR, PCIe and FPGAs can take whole teams months or more and have incredibly complicated constraints, many of which are implicit (the very simplest: put cap near pin, test points inline, make diff pair via symmetrical, keep this SMPS far from sensitive things, and make the inductor loop as tiny as you can). There are a million ways to make a PCB pass DRC and still be a completely non-functional device. In particular, routing is secondary in terms of effort and importance to placement.
If you sample what a random PCB engineer is working on, it's quite likely to be lots of layers, or an extremely tight layout, and probably both. Or something weird and application-dependent like high voltage. And they're probably fiddling with placement at the random sample time you choose.
Toy examples of sparse layouts like mechanical keyboards and DIP ICs are very unrepresentative of where the real effort, and money, goes.
Animats
KiCAD must be moving up in the world if people are using it for 6-layer boards. Or Altium, at US$5,500/year, is now too expensive even for pros.
I'd thought of KiCAD as a hobbyist tool. It didn't have the intensive library support of a pro tool. Footprints and 3D models were user submissions and not well curated or tested. Footprints might need some manual tweaking. With a pro tool, you're paying for a small army of people doing data entry on part info. Has KiCAD improved in that area?
seveibar
Author here. Lots of great points being made. I want to throw in a crazy prediction.
I think routing is basically an image transformer problem without a good dataset right now. If the eye of the giant AI companies turned to autorouting and large synthetic but physically simulated circuit datasets were created, we would basically be done and autorouting would be a solved problem.
This means that all the work I’m doing now on heuristic algorithms, as well as all the hard work done by humans, will probably not be needed in the future. I just don’t see autorouting as being more difficult (in terms of solution space size) than the art being produced by transformer models right now.
I’m saying this because you’re right, these heuristic algorithms can only get us so far- the problem is really difficult. But our human intuition, the magic black box operation we do, doesn’t seem to be too far beyond the emerging transformer image models.
theamk
The major difference is that on a PCB, every single track has to abide by the rules; no exceptions are allowed if you want your board to work.
AI-generated art, by contrast, is full of small defects that people just ignore: who cares about unnatural shadows or unrealistically large fingers?
It is possible to iteratively combine AI with DRC checker and loop until all is good, but it's not obvious to me that this would be performant enough, or if this system will stay in some sort of local minimum forever once the circuit is complex enough.
paulgerhardt
In agreement.
I think maybe the best way to get this data set is to subsidize a few dozen electronics recycling centers for every unique microCT scan they send you. Lease them the tomography equipment. They increase their bottom line, you get a huge dataset of good-to-excellent quality commercial PCB designs.
n4r9
Article makes some important points - especially re visualisation and cache effects.
Some quibbles or glossovers:
> A recursive algorithm is a depth first search. Any loop that explores candidates/neighbors without sorting the candidates is a BFS.
Not sure what to say about this. It's either wrong or I'm missing the intuition. Both DFS and BFS can either be written iteratively or recursively; the real difference is whether you pop your next candidate from the top or the bottom of the stack. Equivalently, whether you use a stack (FILO) or a queue (FIFO). That said, I took maths rather than CS so verify this for yourself.
> It is simply the best foundation for any kind of informed search (not just for 2d grids!)
A* is useful in pathfinding if you have some notion of easily-computed "distance" to your desired target and you're only running a few queries for any given graph. If you plan to run many queries on a mostly-static graph (like a road network) then you could be better off applying a pre-processing algorithm like contraction hierarchies. If you're optimising but don't know what target you're aiming for (e.g. Traveling Salesman) then using some other local search heuristic like 2-opt may be better.
> The major difference between these [A* and BFS] is BFS explores all adjacent nodes, while A* prioritizes exploring nodes that are closer to the destination.
This is definitely a difference. But it glosses over the biggest difference, which is that A* is a dynamic algorithm. That allows you to terminate early with confidence that you have found the shortest path. With BFS you may not be certain until you've searched the whole graph, which could be vast.
hansvm
> recursive == DFS
The intuition there is that people tend to write algorithms recursively when they easily map to interactions with the top of the stack, since that's easier to express in most languages than bringing in an external stack to think about. Hence, if you see something recursive in the wild it likely looks more like DFS than BFS. As you point out, that isn't a hard rule.
thequux
Indeed, BFS, DFS, and A* are the same algorithm just with a different data structure to track unexplored nodes. BFS uses a FIFO queue, DFS uses a LIFO stack, and A* uses a priority queue (generally a heap)
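That observation fits in a few lines. A sketch (illustrative code, not from any of the tools discussed; nodes must be comparable for the heap version's tie-breaking):

```python
import heapq
from collections import deque

def search(start, goal, neighbors, frontier="bfs", heuristic=lambda n: 0):
    """One search loop; only the frontier's ordering changes the algorithm.

    neighbors(node) -> iterable of (next_node, edge_cost)
    frontier: "bfs" (FIFO queue), "dfs" (LIFO stack), or "astar"
              (priority queue; heuristic=lambda n: 0 gives Dijkstra).
    Returns (cost, path) or None if goal is unreachable.
    """
    if frontier == "astar":
        todo = [(heuristic(start), 0, start, [start])]
        push = lambda c, n, p: heapq.heappush(todo, (c + heuristic(n), c, n, p))
        pop = lambda: heapq.heappop(todo)[1:]  # drop the priority key
    else:
        todo = deque([(0, start, [start])])
        push = lambda c, n, p: todo.append((c, n, p))
        pop = todo.popleft if frontier == "bfs" else todo.pop
    seen = set()
    while todo:
        cost, node, path = pop()
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in neighbors(node):
            if nxt not in seen:
                push(cost + w, nxt, path + [nxt])
    return None

# Toy weighted graph; the cheapest 1 -> 4 route is 1-2-3-4 (cost 3).
graph = {1: [(2, 1), (3, 4)], 2: [(3, 1), (4, 5)], 3: [(4, 1)], 4: []}
nbrs = lambda n: graph[n]
```

On this graph the priority-queue frontier finds the cost-3 path, BFS finds the fewest-edge path, and DFS returns whatever it stumbles into first.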
o11c
A* is also importantly different for what the ordering of the priority queue is. Dijkstra's algorithm is just BFS with a priority queue, but the lack of heuristic in the comparison key keeps it clearly breadth-first.
n4r9
Yeah. And A* simply generalises Dijkstra's algorithm by adding a distance heuristic to the priority expression.
Karliss
No, you don't have to search the whole graph with BFS (that might happen if the source and target are in opposite corners). The first time you reach a node you know 100% that you've found the shortest path to it; that's one of the basic invariants allowing BFS to produce a correct result, and you can terminate early once all targets are reached.
A difference between A* and BFS is that instead of searching for the shortest path between two points, BFS finds the shortest path from a single point to ALL points in the graph. A* trades answering a weaker question for a faster individual query. When the problem allows it, replacing thousands of A* calls with a single BFS or Dijkstra call can give a major speedup.
Another important difference is that BFS only works on graphs where all edges have the same length, while A* supports mixed edge lengths. They are not interchangeable, just like finding the minimum element of a list isn't a replacement for sorting the list.
o11c
This is wrong or at least misleading in a couple of ways:
A* is not limited to point-to-point, it also handles point-to-set-of-points just fine (you just might have to reverse the direction you're thinking in). Although this is most often used for "we only need a path to one of them", it's still faster even if you need a path to all of them due to merging. Visiting all points (as BFS does) on the graph is rarely what you actually want (though admittedly, if you're able to throw away the grid it becomes more likely; like the article, I do find people use grids far too often).
BFS works just fine with variable edge weights, you just have to use a priority queue instead of a plain queue for the next set of nodes to be visited.
Karliss
BFS with priority queue isn't BFS it's called Dijkstra's algorithm.
Are you sure it works correctly for the shortest path to all points in a set, not just the "we need one of them" case? When running in reverse I would expect the closest point to be prioritized first, which would potentially mark all the area around the target as visited, blocking paths from other points in the set from reaching it. It's equivalent to introducing one more point into the graph and connecting it to all the points in the set. Unless it works for one-to-all-in-set, it's a weaker result than what Dijkstra or BFS answers. If your problem doesn't need that, don't use it. But that's my point: use the weakest algorithm which directly answers the required query, and be aware that building a more powerful algorithm by repeatedly calling a more specialized but weaker one won't always be optimal.
roetlich
> BFS works just fine with variable edge weights, you just have to use a priority queue instead of a plain queue for the next set of nodes to be visited.
But usually you don't call it BFS if it's using a priority queue?
n4r9
> first time you reach a node you know 100% that it is the shortest path
As you note later in your comment, this is only true in the case of uniform edge weights. In general you can come up with easy counterexamples e.g. a very long direct edge from source to target vs lots of short edges which sum to a smaller amount.
Karliss
If you have mixed edges you don't use BFS, you use Dijkstra's algorithm, preferably with a heap, and the example you gave is still not a problem: the short path made of many edges would be pulled out of the heap first, ensuring the first visit is still the shortest. If you run actual BFS on non-uniform edges, for something like a->b, a->c, b->d, c->d, d->e, it won't necessarily even produce the shortest path at all. The only way I can imagine it working is to drop the requirement of not visiting a single node twice, and at that point it's not BFS at all: not only would you have to visit the whole graph, you'd have to visit it multiple times, worst case exponentially many times if you have something like a branching chain of paths.
dkjaudyeqooe
> The QuadTree and every general-purpose tree data structure are insanely slow. Trees are not an informed representation of your data.
> Any time you’re using a tree you’re ignoring an O(~1) hash algorithm for a more complicated O(log(N)) algorithm
This is incredibly misguided. The hashing approach is fine if your points are evenly distributed and you only really want to query areas relatively close to the fixed subdivisions you've chosen; otherwise your O(1) is going to degenerate to O(n).
Trees are an informed representation when you don't know how your data is distributed.
Similar story for randomized algorithms. What happens when your search space is many trillions of items or possibilities? Or there are no heuristics to be had? If you can't brute force it and you can't use a clever algorithm then randomized algorithms are a savior.
Maybe these aren't needed for this specific application, but best to not make sweeping statements.
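For reference, this is the kind of spatial hash the article is advocating, as a minimal sketch (the cell size and API are made up for illustration):

```python
from collections import defaultdict

class SpatialHash:
    """Uniform grid ("spatial hash"). Queries are O(~1) when points are
    reasonably evenly spread, but degrade toward O(n) when everything
    piles into a few cells, which is exactly when a tree starts to pay."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, x, y):
        # Floor-divide so negative coordinates hash to the right cell too.
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y):
        self.cells[self._key(x, y)].append((x, y))

    def near(self, x, y):
        """Candidate points in the 3x3 block of cells around (x, y)."""
        cx, cy = self._key(x, y)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                out.extend(self.cells.get((cx + dx, cy + dy), ()))
        return out

sh = SpatialHash(cell=1.0)
for p in [(0.5, 0.5), (1.5, 0.5), (5.5, 5.5)]:
    sh.insert(*p)
```

The tradeoff both sides are describing is visible here: `near()` touches at most nine buckets regardless of total point count, but if all points share one bucket you're back to scanning a list.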
rtpg
Measure measure measure measure. Each case is different.
But more seriously I think tree based algos tend to be overhyped, and people get a bit absorbed into thinking about big-O behavior when the constant factors are super important even when you’re looking at hundreds of thousands of elements. Not to mention stuff like data locality. Like sometimes your computer will be faster at just looking in a seq scan rather than dealing with the bookkeeping of a fancier structure. Sometimes.
I think a nicer argument overall is to build a tiny wrapper around your operations, build out what is easy, and then figure it out through measurements.
Worst case you end up entirely rewriting your program to accommodate a different structure to try for better perf but in my experience rewriting entire files from scratch tends to get you a good number of improvements for free.
dkjaudyeqooe
Sequential scans are terribly underrated given today's architectures. If the datum is small and it's like 1000 elements max it's going to be hard to beat a simple scan of an array for speed and memory efficiency.
But trees et al also make a lot of sense as a default. If you need the performance to scale in a roughly linear manner you need to be smart about it. This is what sinks a lot of software, usability wise.
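A tiny harness in the "measure, don't guess" spirit (toy data; `bisect` stands in for the fancier O(log n) structure):

```python
import bisect
import timeit

data = sorted(range(0, 2000, 2))  # 1000 small ints, sorted

def linear_contains(xs, v):
    for x in xs:                  # plain sequential scan: cache-friendly
        if x == v:
            return True
    return False

def bisect_contains(xs, v):
    i = bisect.bisect_left(xs, v)  # O(log n) with more bookkeeping per step
    return i < len(xs) and xs[i] == v

# Same answers either way; only the constant factors differ.
for probe in (0, 999, 1001, 1998):
    assert linear_contains(data, probe) == bisect_contains(data, probe)

t_lin = timeit.timeit(lambda: linear_contains(data, 1998), number=2000)
t_bis = timeit.timeit(lambda: bisect_contains(data, 1998), number=2000)
print(f"linear: {t_lin:.4f}s  bisect: {t_bis:.4f}s")
```

Which one wins depends on your sizes and hardware, which is the whole point: run it on your data before swapping structures.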
seveibar
Author here. This was almost worth a mention in the article. Every once in a while I'd have a friend look at a piece of my algorithm and suggest swapping out a sort() function for a priority queue etc. (without looking at the performance data), and I'd find no change or sometimes even a slightly worse result! This is why having performance data is so critical: the performance of a complex algorithm can be counter-intuitive, and most performance "gotchas" are straight-up mistakes or redundant recalculations rather than fundamental algorithm issues. (Remember the GTA5 bug that caused the loading screen to take an extra 5 seconds, and was simple enough for a random dev to patch? That is the most common type of performance issue in development!)
phkahler
For 3D I found octrees very effective and fast. In my implementation you can even move items around without having to regenerate the tree.
I have yet to find a satisfactory way to store points (in 2D or 3D) and be able to query nearby points. A k-d tree is nice, but I want to add points as I go, not build a structure around a fixed set.
o11c
I suspect that thinking in trie terms is the way to go, where each node chooses its preferred representation.
The top level should usually use a grid, though uniform-density children can also use one. The grid scale should be chosen to maximize low-density children while minimizing empty children.
For low-density children, just store an unsorted bunch of items, or maybe sort them on just one axis (preferably the outermost axis used for rendering). If your outer level is actually uniform, all of its children will be this (or empty).
For high-density children, use a quadtree/octree ... or just treat them as opaque objects to be handled recursively. These still suck for all the memory-walking they do, but since you've handled the outer part using the grid, the overhead is smaller. These can do "nearby" queries if your implementation is good, but many implementations suck. Making them mutable just means you need to either implement rebalancing or just stick to fixed planes.
MrLeap
Almost everything matches my gamedev heuristics. I even have empathy for choosing JS. I'm making a game modding framework right now that operates on a lispy kind of s-expressions. Optimizing to accelerate creative iteration time > *, I've found.
A*, Lee's algorithm and the like are all cool. It's criminal to write any kind of floodfill without having an accompanying visualization, you're squandering so much dopamine.
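For anyone who hasn't seen it, the wavefront idea fits in a few lines — an illustrative sketch of Lee-style BFS on a grid, not the article's router:

```javascript
// Lee-style wavefront expansion: BFS outward from the start, labeling
// each free cell with its distance. A route is recovered by walking
// downhill from the target. Rendering each wavefront generation is
// exactly the visualization worth watching.
function leeFill(grid, start) {
  // grid: 2D array, 0 = free, 1 = obstacle; returns distances (-1 = unreached)
  const h = grid.length, w = grid[0].length;
  const dist = grid.map(row => row.map(() => -1));
  const queue = [start];
  dist[start.y][start.x] = 0;
  for (let i = 0; i < queue.length; i++) { // index-based queue avoids O(n) shift()
    const { x, y } = queue[i];
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
      if (grid[ny][nx] === 1 || dist[ny][nx] !== -1) continue;
      dist[ny][nx] = dist[y][x] + 1;
      queue.push({ x: nx, y: ny });
    }
  }
  return dist;
}
```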
This article has me wondering if all the gamedev things I *didn't* read about but are adjacent have utility in this kind of thing. I can't be the first person to think a boids router would be pretty fun. More seriously, I bet jumpflooding signed distance fields would provide you a lot of power.
Everything about spatial hashing in particular matches my experience. I haven't found many occurrences in almost two decades where any of the tree structures are worth the time. One notable exception: the lovecraftian text editor I made uses quite a lot of tries for reactivity. They're a nice way to compress 45,000 words into a compact state machine for event handling.
seveibar
It is a really fun idea to build a boids router (shelving that for a future article!) I wrote previously about recursive pattern autorouters which are really good at having small solution spaces (and therefore, easier to get conventional machine learning algorithms to predict). There are so many interesting unexplored areas in autorouting!!
I hadn't heard of jumpflooding (for others: fast, parallel algorithm for approximating distance fields), that could definitely be interesting, thanks for the tip!
shadowgovt
I think trees were a lot more useful in the past when memory and caches were smaller (and I suspect they can still be useful for precomputation, though I'd have to sit down and benchmark fixed-grid-with-smart-sizing vs. tree). Trees are also amenable to recursive algorithms, but the author has noted that they have reasons to choose iterative over recursive algorithms, so these pieces of advice synergize.
(It is perhaps worth noting: broadly speaking, "recursive" vs. "non-recursive" is a bit of an artificial distinction. The real question is "does a pre-baked algorithm with rigid rules handle flow control, or do you?" If you care a lot about performance, you want the answer to be "you," so having your run state abstracted away into an execution-environment-provided call stack that you can't easily inspect or mutate at runtime begins to get in your way.)
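A tiny illustration of that trade (hypothetical, not from the article): the same depth-first traversal written against the call stack and against a stack you own. Only the second version lets you cap depth, pause and resume, or reorder the frontier mid-run.

```javascript
// Same traversal, two ways. `graph` maps node -> array of neighbors.
function dfsRecursive(graph, node, visited = new Set()) {
  visited.add(node);
  for (const next of graph[node] || []) {
    if (!visited.has(next)) dfsRecursive(graph, next, visited);
  }
  return visited;
}

function dfsIterative(graph, start) {
  const visited = new Set();
  const stack = [start]; // our stack: inspectable and mutable at runtime
  while (stack.length > 0) {
    const node = stack.pop();
    if (visited.has(node)) continue;
    visited.add(node);
    for (const next of graph[node] || []) {
      if (!visited.has(next)) stack.push(next);
    }
  }
  return visited;
}
```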
amiga386
> 95% of your focus should be on reducing the number of iterations. This is why language doesn’t matter.
And then once you've used the playful, expressive (interpreted, abstract, slow) language you enjoy using, to develop an excellent, performant algorithm... if performance still matters, write the same thing again in a performant, low-level language, and perhaps even write architecture-specific assembly.
There's a reason numpy, pandas, OpenCV, TensorFlow aren't written in pure Python, but instead let Python tell them to do things they've implemented in high performance C++/assembly/CUDA, etc.
No matter how proud the authors would be of exploring a problem space and coming up with an efficient algorithm (and blogging about it), I doubt they'd be popular numerical computing libraries if they insisted on it being written in pure Python, or Javascript.
While this is a fun blog post, I don't think it'd have the same takeaways if the author's algorithmic insights got a pure-Javascript HEVC encoder down from 1 day per frame to 3 hours per frame...
teleforce
>I believe that solving autorouting will be a massive unlock for physical-world innovation and is a key piece to enable the “vibe-building” of electronics.
I strongly believe that the CAD world, including EDA, is on the verge of disruption by AI, or more correctly machine intelligence (constraint programming, CP, to be exact), very similar to how LLMs disrupted chatbot technology [1], [2].
The path to this is most probably through solving autorouting with CP: a deterministic machine intelligence built on logic, optimization, and constraint programming [3], [4], [5], [6].
Fun fact: George Boole, the founder of modern symbolic logic, is the great-great-grandfather of Geoffrey Everest Hinton, the "Godfather of AI" and "Godfather of Deep Learning".
[1] Most AI value will come from broad automation, not from R & D (313 comments):
https://news.ycombinator.com/item?id=43447616
[2] Diagrams AI can, and cannot, generate (68 comments):
https://news.ycombinator.com/item?id=43398434
[3] Constraint programming:
https://en.wikipedia.org/wiki/Constraint_programming
[4] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:
https://www.youtube.com/live/TknN8fCQvRk
[5] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:
https://youtube.com/watch?v=HB5TrK7A4pI
[6] High-Quality Ultra-Compact Grid Layout of Grouped Networks [PDF]:
https://ialab.it.monash.edu/~dwyer/papers/gridlayout2015.pdf
knodi123
Man, look at all those keywords I remember from college. I wish I got to use fancy well-known algorithms. Instead all I'm doing is building UI components and REST APIs to present elasticsearch results. All the fun stuff is buried in black boxes.
seveibar
Algorithms are a lot more fun now that LLMs have all the geometric heuristics memorized. A lot of algorithms are just inescapable in game development, so if you're itching to implement an algorithm, making something like a tower defense game involves a lot of the classics!
tlarkworthy
LLMs do not have it memorized. Try asking one to calculate the signed angle between two planes. I gave up and told it to copy the Unity implementation.
Animats
Right. There was a time when I had a well-worn copy of Knuth, vol. 1 at hand. No longer.
mschuster91
The core problem is a severe mismatch between academic curricula and what the job market actually needs on one side, and on the other, the fact that companies use "needs a college degree" as a proxy to weed out risk and bypass ADA/anti-discrimination laws. Both are a massive drain on the economy.
IMHO, at the very least the current CS degree needs to be split into various smaller (and faster-to-complete) subdegrees: the fancy math stuff should be its own degree, possibly fused with a new degree relating to AI; database and network theory should be its own degree, and frankly so should low-level assembly; and the "how do electronic components, NAND gates, boolean logic and whatnot work" material should move to electronics engineering. And what the market actually needs most, people who can crank out CRUD apps, should either be its own degree, if one insists this needs academic knowledge, or move to something like trades education.
In parallel, the job requirements gatekeeping should be tackled by laws - companies must no longer be allowed to require degrees that have little to no impact on the actual job duties. It's forcing our children to waste years of their life and take on five to six figures worth of debt, all only for companies to be able to weed out people.
nine_k
This is great. As someone not working directly on 2D/3D space problems, my biggest takeaway is the value of visualizing things. Humans are really good at grasping and analyzing pictures. Another is the idea of using a stochastic or brute-force method to understand the shape of the problem, so to speak, and choosing a better method based on that, not from a theoretic understanding alone.
aranchelk
> 2. Implementation Language doesn’t matter
> I’m controversially writing our autorouter in Javascript. This is the first thing people call out, but it’s not as unreasonable as you might expect. Consider that when optimizing an algorithm, you’re basically looking at improving two things:
> 1. Lowering the number of iterations required (make the algorithm smart)
> 2. Increasing the speed of each iteration
It may be true in this domain, I wouldn’t know, but applied to software engineering in general IMO it would be a massively incorrect assumption to say choice of language doesn’t affect speed and needed number of iterations.
bigiain
I think there's a fair argument to be made that when you're chasing Big O algorithmic improvements, the effective constant terms incurred by "faster or slower language execution" are premature optimisation. The difference between Rust or hand-written assembly and JavaScript or VisualBasic is pretty irrelevant if you're still trying to get your exponent or polynomial terms under control.
windward
The argument falls apart when the premise that you're chasing big O does.
Poor cache/memory/disk access patterns can cause constant-factor regressions so vast that a 'worse' algorithm actually fares better. Also, we tend to falsely settle on referring to 'O' rather than omega, theta, and average-case, even when we don't care about worst-case or contrived workloads. See quicksort vs. mergesort.
For a similar concept in another domain, see also the external memory model.
HdS84
Yep. Also, JS is easy to iterate on; it's more lenient than Rust or, say, C#. Especially if you are exploring things, that's a huge boon. Obviously, for the same algorithm, compiled languages will be faster. Does it matter? Maybe. But time to find the algorithm is also critical, and if JS helps here, it's a good choice.
kalaksi
But what about afterwards? You move to a more performant language or just accept that you're now invested in JS?
andrewaylett
One or the other, probably — Facebook were sufficiently invested in PHP that they released a whole new VM for the language. In Python, one would relatively commonly extract the performance-critical sections into C (or, nowadays, Rust) and you could do the same with Node too. Or the JIT might be fast enough that it's not worthwhile.
imtringued
I think Javascript might doom the autorouter to small scale designs or very long processing times, but I have never used tscircuit so maybe I'm wrong.
seveibar
Believe in the cache! There's a reason most websites can be written in javascript now and not have major issues, the I/O bottleneck is more important than the raw computation. As long as the algorithm is cacheable (and this means each stage maintains partial/spatial caches!) the implementation language shouldn't be very important.
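To make the spatial-cache idea concrete, here's a hypothetical sketch (my illustration, not tscircuit's actual code): key each subproblem by everything that can affect it — the obstacles inside its region plus the ports to connect — so moving a component only invalidates the regions it touches.

```javascript
// Hypothetical spatial cache: a subproblem's key is derived only from
// the obstacles overlapping its region and its ports, so identical
// subproblems elsewhere on the board (or on a later run) are cache hits.
function cacheKey(region, obstacles, ports) {
  const relevant = obstacles
    .filter(o => o.x >= region.minX && o.x <= region.maxX &&
                 o.y >= region.minY && o.y <= region.maxY)
    .map(o => `${o.x},${o.y},${o.w},${o.h}`)
    .sort(); // sort for a deterministic key regardless of input order
  const portStr = ports.map(p => `${p.x},${p.y}`).sort().join("|");
  return relevant.join("|") + "::" + portStr;
}

const routeCache = new Map();
function solveRegion(region, obstacles, ports, solver) {
  const key = cacheKey(region, obstacles, ports);
  if (!routeCache.has(key)) {
    routeCache.set(key, solver(region, obstacles, ports));
  }
  return routeCache.get(key);
}
```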
There are specific NP Hard problems that aren't spatially cacheable and we may need WASM for, but these problems tend to be nearly impossible for humans as well, so PCB designers will opt to just add some space on the board or add additional layers.
shadowgovt
I love this article. Two things I'd consider highlighting:
1. Autorouting in particular is a problem well-suited to visualization, and not all problems are. It's fundamentally about real things in real space (bonus: they're mostly constrained to a 2D plane, which makes them easy to visualize on a computer screen). A lot of hard and interesting problems don't match that reality and so aren't as amenable to visualization.
(... but the general advice, "look for opportunities to visualize," stands. Your eyes and brain are very fast matchers for a broad variety of patterns in a way a computer isn't without a lot of special-purpose, tuned software).
2. JavaScript, between the language itself (specifically the ability to reflect data structures at runtime) and the ecosystem around it, is very amenable to visualization. In that way specifically, it was a strong choice for this problem domain. Imagine how much work you'd be taking on if you decided to "just bang out a quick visualization" for an arbitrary piece of the algorithm in C++, or Rust.
ChrisGammell
I'm generally in the "never trust the autorouter" camp (and by extension "never trust the bevy of AI tools entering the space"), but it's undeniable there are some big opportunities in the eCAD space to speed up some aspects of layout. I'm probably more likely to use co-creation tools rather than full-auto tools, simply because I rely heavily on iteration. Many times when starting a design my placements aren't set, and that has a huge impact on routing. I didn't see on your page whether placement was part of your algorithm. I already rely on tools like push-and-shove and occasionally autocomplete.
I'm always curious about people entering the space though. It is a small market, IMO. The tools are fractured, the existing players are lumbering behemoths, and the users are cranky zealots (you will have to pry KiCad out of my cold dead hands). I noted the step about the AR being written in JavaScript, which I have no real opinion on, but I'm curious about plans to plug into ecosystems (CAD vendors, OS tools) or try and draw people into another new ecosystem.