
AI-designed chips are so weird that 'humans cannot understand them'

rkagerer

In a sense, Adrian Thompson kicked this off in the '90s when he applied an evolutionary algorithm to FPGA hardware. Using a "survival of the fittest" approach, he taught a board to discern the difference between a 1 kHz and a 10 kHz tone.

The final generation of the circuit was more compact than anything a human engineer would ever come up with (reducible to a mere 37 logic gates), and utilized all kinds of physical nuances specific to the chip it evolved on - including feedback loops, EMI effects between unconnected logic units, and (if I recall) operating transistors outside their saturation region.
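For flavor, here is a minimal sketch of the evaluate-select-mutate loop behind that approach (Python; the constants and the placeholder fitness function are illustrative, not Thompson's actual setup):

    import random

    GENOME_BITS = 1800     # length of the FPGA configuration bitstring (illustrative)
    POP_SIZE = 50
    MUTATION_RATE = 0.02

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_BITS)]

    def mutate(genome):
        # flip each bit with a small probability
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    def score_tone_discrimination(genome):
        # Placeholder fitness: the real experiment programs the FPGA with
        # `genome` and measures how cleanly its output separates the two tones.
        return random.random()

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(100):
        ranked = sorted(population, key=score_tone_discrimination, reverse=True)
        parents = ranked[: POP_SIZE // 5]   # keep the fittest 20%
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP_SIZE - len(parents))]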

Article: https://www.damninteresting.com/on-the-origin-of-circuits/

Paper: https://www.researchgate.net/publication/2737441_An_Evolved_...

Reddit: https://www.reddit.com/r/MachineLearning/comments/2t5ozk/wha...

dang

Related. Others?

The origin of circuits (2007) - https://news.ycombinator.com/item?id=18099226 - Sept 2018 (25 comments)

On the Origin of Circuits: GA Exploits FPGA Batch to Solve Problem - https://news.ycombinator.com/item?id=17134600 - May 2018 (1 comment)

On the Origin of Circuits (2007) - https://news.ycombinator.com/item?id=9885558 - July 2015 (12 comments)

An evolved circuit, intrinsic in silicon, entwined with physics (1996) - https://news.ycombinator.com/item?id=8923902 - Jan 2015 (1 comment)

On the Origin of Circuits (2007) - https://news.ycombinator.com/item?id=8890167 - Jan 2015 (1 comment)

That's not a lot of discussion—we should have another thread about this sometime. If you want to submit it in (say) a week or two, email hn@ycombinator.com and we'll put it in the second-chance pool (https://news.ycombinator.com/pool, explained at https://news.ycombinator.com/item?id=26998308), so it will get a random placement on HN's front page.

viccis

I really wish I still had the link, but there used to be a website that listed a bunch of cases where machine learning (mostly reinforcement learning) was used to teach a computer to play a video game, and it ended up using perverse strategies no human would, like exploiting weird glitches (https://www.youtube.com/watch?v=meE5aaRJ0Zs shows this with Q*bert).

Closest I've found to the old list I used to go to is this: https://heystacks.com/doc/186/specification-gaming-examples-...

robertjpayne

Make no mistake, most humans will exploit any glitches and bugs they can find for personal advantage in a game. It's just that machines can exploit timing bugs better.

quanto

Fascinating paper. Thanks for the ref.

Operating transistors outside the linear region (the saturated "on") on a billion+ scale is something that we as engineers and physicists haven't quite figured out, and I am hoping that this changes in the future, especially with the advent of analog neuromorphic computing. The quadratic region (before the "on") is far more energy efficient, and the non-linearity could actually help with computing, not unlike the activation function in an NN.

Of course, modeling the nonlinear behavior is difficult. My prof would say that for every coefficient in SPICE's transistor models, someone dedicated an entire PhD (and there are a lot of these coefficients!).
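For context, the zeroth-order version of what all those coefficients refine is the textbook long-channel square-law model; a quick sketch with illustrative parameter values, not a real device model:

    def drain_current(v_gs, v_ds, k=2e-4, v_th=0.4):
        # Textbook long-channel square-law MOSFET model (amps).
        # k is the transconductance parameter, v_th the threshold voltage;
        # both values are illustrative.
        v_ov = v_gs - v_th                            # overdrive voltage
        if v_ov <= 0:
            return 0.0                                # cutoff (ignores subthreshold leakage)
        if v_ds < v_ov:
            return k * (v_ov * v_ds - v_ds ** 2 / 2)  # triode: quadratic in v_ds
        return 0.5 * k * v_ov ** 2                    # saturation: quadratic in the overdrive

    print(drain_current(v_gs=0.6, v_ds=0.05))   # triode region
    print(drain_current(v_gs=0.6, v_ds=0.80))   # saturated "on"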

I haven't been in touch with the field since I moved up the stack (numerical analysis/ML), so I would love to learn about any recent progress in this area.

ImHereToVote

I believe neuromorphic spiking hardware will be the step to truly revolutionize the field of anthropod contagion issues.

breatheoften

I remember this paper being discussed in the novel "Science of Discworld", a super interesting book, a collaboration between a fiction author and some real-world scientists, in which the fictional characters discover our universe and its rules. I always thought there was some deep insight about the universe to be had within this paper. Now I think the unexpectedness instead says something about the nature of engineering and control and the human mechanisms for understanding these sorts of systems: almost by definition, human engineering relies on linearized approximations to characterize the effects being manipulated, so something which operates in modes far outside those models is basically inscrutable. I think that's kind of expected, but the results still provoke fascination about the solutions that superhuman engineering methods might yet find with modern technical substrates.

hiAndrewQuinn

I remember talking about this with my friend and fellow EE grad Connor a few years ago. The chip's design really feels like a biological approach to electrical engineering, in the way that all of the layers we humans like to neatly organize our concepts into just get totally upended and messed with.

pharrington

Biology also uses tons of redundancy and error correction that the generative algorithm approach lacks.

Terr_

IIRC the flip-side was that it was hideously specific to a particular model and batch of hardware, because it relied on something that would otherwise be considered a manufacturing flaw.

svilen_dobrev

A long time ago, maybe in the Russian journal "Radio" ~198x, someone there described that if you got a certain transistor from a particular batch from a particular factory/date and connected it in some weird way, it would make a full FM radio (or something similarly complex), because the yields had gone wrong. No idea how they figured that out.

But mistakes aside, what would it be like if the chips from the factory could learn / fine-tune how they work (better), on the fly?

cgcrob

Relying on nuances of the abstraction and undefined or variable characteristics sounds like a very very bad idea to me.

The one thing you generally want for circuits is reproducibility.

alexpotato

I read the Damn Interesting post back when it came out, and seeing the title of this post immediately made me think of Thompson's work as well.

mikewarot

I've only started to look into the complexities involved in chip design (for my BitGrid hobby horse project) but I've noticed that in the Nature article, all of the discussion is based on simulation, not an actual chip.

Let's see how well that chip does if made by the fab. (I doubt they'd actually make it; likely there are a thousand design rule checks it would fail.)

If you paid them to override the rules and make it anyway, I'd like to see if it turned out to be anything other than a short circuit from power to ground.

valine

I strongly dislike when people say AI when they actually mean optimizer. Calling the product of an optimizer "AI" is more defensible: you optimized an MLP and now it writes poetry. Fine. Is the chip itself the AI here? That's the product of the optimizer. Or is it the 200 lines of code that define a reward and iterate the traces?

catlifeonmars

Yesterday I used a novel AI technology known as “llvm” to remove dead code paths from my compiled programs.

LPisGood

Optimization is near and dear to my heart (see username), but I think it’s fine to call optimization processes AI because they are in the classical sense.

satvikpendem

Sigh, another day, another post for which I must copy-paste my bookmarked Wikipedia entry:

> "The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1] Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon.[4]

> McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[5] It is an example of moving the goalposts.[6]

> Tesler's Theorem is:

> AI is whatever hasn't been done yet.

> — Larry Tesler

https://en.wikipedia.org/wiki/AI_effect

taberiand

We'll never build true AI, just reach some point where we prove humans aren't really all that intelligent either

trollbridge

If you want grant and VC dollars, you’ll rebrand things as “AI”.

scotty79

The chip is not called an AI chip but rather an AI-designed chip. At least in the title.

valine

My point is that it’s equally ridiculous to call either AI. If our chip here is not the AI then the AI has to be the optimizer. By extension that means AdamW is more of an AI than ChatGPT.

ulonglongman

I don't understand. I learnt about optimizers and genetic algorithms in my AI courses. There are lots of different things we call AI, from classical AI (algorithms for discrete and continuous search, planning, SAT, Bayesian stuff, decision trees, etc.) to more contemporary deep learning, transformers, genAI, etc. AI is a very, very broad category of topics.

rowanG077

I don't understand what your gripe is. Both are AI. Even rudimentary decision trees are AI.

janice1999

Is this really so novel? Engineers have been using evolutionary algorithms to create antennas and other components since the early 2000s at least. I remember watching a FOSDEM presentation on an 'evolved' DSP for radios in the 2010s.

https://en.wikipedia.org/wiki/Evolved_antenna

happytoexplain

I don't believe it's comparable. Yes, we've used algorithms to find "weird shapes that work" for a long time, but they've always been very testable. AI is being used for more complex constructs with an exponentially greater surface area to test (like programs and microarchitectures).

xanderlewis

This is really interesting and I’m surprised I’ve never even heard of it before.

Now I’m imagining antennas breeding and producing cute little baby antennas that (provided they’re healthy enough) survive to go on to produce more baby antennas with similar characteristics, and so on…

It’s a weird feeling to look at that NASA spacecraft antenna, knowing that it’s the product of an evolutionary process in the genuine, usual sense. It’s the closest we can get to looking at an alien. For now.

jhot

Two antennas get married. The wedding was ok but the reception was great!

pmlnr

    These are highly complicated pieces of equipment almost as complicated as living organisms.
    In some cases, they've been designed by other computers.
    We don't know exactly how they work.
Westworld, 1973

NitpickLawyer

> AI models have, within hours, created more efficient wireless chips through deep learning, but it is unclear how their 'randomly shaped' designs were produced.

IIRC this was also tried at NASA: they used a "classic" genetic algorithm to create the "perfect" antenna for some applications, and it looked unlike anything previously designed by engineers, but it outperformed the "normal" shapes. Cool to see deep learning applied to chip design as well.

Frenchgeek

Wasn't there a GA FPGA design to distinguish two tones that was so weird and specific that not only did it use capacitance for part of its work, but it literally couldn't work on another chip of the same model?

isoprophlex

Yes, indeed, although the exact reference escapes me for the moment.

What I found absolutely amazing when reading about this is that it's exactly how I always imagined things in nature evolving.

Biology is mostly just messy physics where everything happens at the same time across many levels of time and space, and a complex system that has evolved naturally appears to always contain these super weird, specific, cross-functional hacks that somehow end up working super well towards some goal.

alexpotato

> Yes, indeed, although the exact reference escapes me for the moment.

It's mentioned in a sister comment: https://www.damninteresting.com/on-the-origin-of-circuits/

actionfromafar

I think it was that or a similar test where it would not even run on another part, just the single part it was evolved on.

robotresearcher

Yes. The work of Adrian Thompson at the University of Sussex.

https://scholar.google.com/citations?user=5UOUU7MAAAAJ&hl=en

zahlman

If we can't understand the designs, how rigorously can we really test them for correctness?

molticrystal

Our human designs strive to work in many environmental conditions. Many early AI designs, if iterated in the real world, would incorporate local physical conditions into their circuits. For example, that fluorescent lamp or fan I'm picking up (from the AI/evolutionary design algorithm's perspective) has great EM waves that could serve as a reliable clock source, eliminating the need for my own. Thus, if you move things, it would break.

I am sure there are analogous problems in the digital simulation domain. Without thorough oversight and testing across multiple power cycles, it's difficult to predict how well the circuit will function, or how incorporating that feedback into the program will affect its direction; if you're not careful, you get the aforementioned strange problems.

Although the article mentions corrections to the designs, what may be truly needed is more constraints. The better we define these constraints, the more likely correctness will emerge on its own.

skissane

> Our human designs strive to work in many environmental conditions. Many early AI designs, if iterated in the real world, would incorporate local physical conditions into their circuits. For example, that fluorescent lamp or fan I'm picking up (from the AI/evolutionary design algorithm's perspective) has great EM waves that could serve as a reliable clock source, eliminating the need for my own. Thus, if you move things, it would break.

This problem may have a relatively simple fix: have two FPGAs – from different manufacturing lots, maybe even different models or brands – each in a different physical location, maybe even on different continents. If the AI or evolutionary algorithm has to evolve something that works on both FPGAs, it will naturally avoid purely local stuff which works on one and not the other, and produce a much more general solution.
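In fitness terms, that amounts to scoring each candidate on every board and keeping only the worst-case result, so a design that exploits one board's quirks never survives selection. A sketch, where `evaluate` stands in for whatever programs a board and measures performance:

    def robust_fitness(bitstream, boards, evaluate):
        # Score a candidate on several physically distinct FPGAs and return
        # the worst-case score, penalizing device-specific hacks.
        return min(evaluate(board, bitstream) for board in boards)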

PhilipRoman

Ask the same "AI" to create a machine-readable proof of correctness. Or, even better, start from an inefficient but known-to-be-working system, and only let the "AI" apply correctness-preserving transformations.
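A sketch of that second suggestion for a small combinational block, assuming the circuit is cheap enough to check exhaustively against a known-good reference (all the names here are hypothetical):

    from itertools import product

    def equivalent(candidate, reference, n_inputs):
        # Exhaustively check that two combinational circuits agree on every input.
        return all(candidate(bits) == reference(bits)
                   for bits in product((0, 1), repeat=n_inputs))

    def optimize(reference, propose_variant, cost, n_inputs, steps=10_000):
        # Start from a known-working design and accept only variants that stay
        # equivalent to it while improving the cost metric (area, delay, ...).
        current = reference
        for _ in range(steps):
            variant = propose_variant(current)
            if equivalent(variant, reference, n_inputs) and cost(variant) < cost(current):
                current = variant   # correctness-preserving step accepted
        return current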

djmips

Especially true if the computer's design creates a highly coupled device that could be process-sensitive.

42lux

Results?

evrimoztamur

Can you always test the entire input space? Only for a few applications.

42lux

I am really curious about how you test software...

choxi

Maybe we’re all just in someone’s evolutionary chip designer

awinter-py

> The AI also considers each chip as a single artifact, rather than a collection of existing elements that need to be combined. This means that established chip design templates, the ones that no one understands but probably hide inefficiencies, are cast aside.

there should be a word for this process of making components efficiently work together, like 'optimization' for example

DrNosferatu

It's inevitable: software (and other systems) will also become like this.

satvikpendem

I've been using Cursor, and it already is. I've found myself becoming merely a tester of the software rather than a writer of it, the more I use this IDE.

DrNosferatu

It's still a bit clunky, IMHO. Or did you find a good tutorial to leverage it fully?

codr7

And then it's pretty much game over.

DrNosferatu

It’s better we [democracies] ride and control the AI change of paradigm than just let someone else do it for us.

pessimizer

"Democracy" is just a chant now. It's supposed to somehow happen without votes, privacy, freedom of expression, or freedom of association.

mwkaufma

"In particular, many of the designs produced by the algorithm did not work"