I failed to recreate the 1996 Space Jam website with Claude
284 comments
· December 7, 2025
thecr0w
Thanks, my friend. I added a strike through of the error, a correction, and credited you.
I'm keeping it in for now because people have made some good jokes about the mistake in the comments and I want to keep that context.
manbash
Ah, those days, where you would slice your designs and export them to tables.
chrisweekly
I remember building really complex layouts w/ nested tables, and learning the hard way that going beyond 6 levels of nesting caused serious rendering performance problems in Netscape.
JimDabell
I remember seeing a co-worker stuck on trying to debug Netscape showing a blank page. When I looked at it, it wasn’t showing a blank page per se, it was just taking over a minute to render tables nested twelve deep. I deleted exactly half of them with no change to the layout or functionality, and it immediately started rendering in under a second.
shomp
Six nesting levels for tables? Cool, what were you making?
thecr0w
I learned recently that this is still how a lot of email HTML gets generated.
mananaysiempre
Apparently Outlook (the actual one, not the recent pretender) still uses some ancient version of Word's HTML engine as the renderer, so there isn't much choice.
ricardonunez
Oh yeah, I recently had to update a newsletter design like that, and older versions of Outlook still didn't render it properly.
gregoryl
Gosh, there was a website where you'd submit a PSD plus payment, and they'd spit out a sliced design. Initially tables, later CSS. Life saver.
Brajeshwar
Y Combinator funded one such company, MarkupWand.[1] A friend is one of the co-founders.
johnebgd
I cut my teeth developing for the web using GoLive and will never forget how they used tables to layout a page from that tool…
thuttinger
Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things. There are a lot of problems that are easy for a junior web dev to get right but impossible for an LLM. On the other hand, I was able to write a C program that added gamma color profile support to Linux compositors that don't support it (in my case Hyprland) within a few minutes! A seemingly hard task, for me, which would have taken at least a day or more if I hadn't let Claude write the code. With one prompt, Claude generated C code that compiled on the first try and:
- Read an .icc file from disk
- parsed the file and extracted the VCGT (video card gamma table)
- wrote the VCGT to the video card for a specified display via amdgpu driver APIs
The only thing I had to fix was the ICC parsing, where it would parse header values in the wrong byte order (ICC files are big-endian).
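For anyone curious what that byte-order pitfall looks like, here's a minimal illustrative sketch in Python rather than the actual C. It parses only two header fields, not the vcgt tag itself; the key point is that every multi-byte integer in an ICC profile is big-endian, so the parser must say so explicitly:

```python
import struct

def parse_icc_header(data: bytes) -> dict:
    """Parse a couple of fields from an ICC profile header.

    All multi-byte integers in an ICC profile are big-endian, so '>'
    is required in every struct format string. Using native byte order
    on a little-endian machine is exactly the bug described above.
    """
    size, = struct.unpack_from(">I", data, 0)  # profile size, bytes 0-3
    signature = data[36:40]                    # must be b'acsp'
    return {"size": size, "signature": signature}

# A minimal 40-byte fake header, just for illustration:
fake = struct.pack(">I", 128) + b"\x00" * 32 + b"acsp"
print(parse_icc_header(fake))  # {'size': 128, 'signature': b'acsp'}
```

Reading the same bytes with `"<I"` would report a wildly wrong profile size, which is how this class of bug usually announces itself.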
jacquesm
Claude didn't write that code. Someone else did and Claude took that code without credit to the original author(s), adapted it to your use case and then presented it as its own creation to you and you accepted this. If a human did this we probably would have a word for them.
mlinsey
Certainly if a human wrote code that solved this problem, and a second human copied and tweaked it slightly for their use case, we would have a word for them.
Would we use the same word if two different humans wrote code that solved two different problems, but one part of each problem was somewhat analogous to a different aspect of a third human's problem, and the third human took inspiration from those parts of both solutions to create code that solved a third problem?
What if it were ten different humans writing ten different-but-related pieces of code, and an eleventh human piecing them together? What if it were 1,000 different humans?
I think "plagiarism", "inspiration", and just "learning from" fall on some continuous spectrum. There are clear differences when you zoom out, but they are in degree, and it's hard to set a hard boundary. The key is just to make sure we have laws and norms that provide sufficient incentive for new ideas to continue to be created.
whatshisface
The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."
nitwit005
Ask for something like "a first person shooter using software rendering", and search github for the function names for the rendering functions. Using Copilot I found code simply lifted from implementations of Doom, except that "int" was replaced with "int32_t" and similar.
It's also fun to tell Copilot that the code will violate a license. It will seemingly always tell you it's fine. Safe legal advice.
nextos
In the case of LLMs, due to RAG, very often it's not just learning but almost direct, real-time plagiarism from concrete sources.
bsaul
That's an interesting hypothesis: that LLMs are fundamentally unable to produce original code.
Do you have papers to back this up? That was also my reaction when I saw some really crazy accurate comments on a vibe-coded piece of code, but I couldn't prove it, and thinking about it now I think my intuition was wrong (i.e., LLMs do produce original complex code).
jacquesm
We can settle that question in an intuitive way: if human input is not what is driving the output, then it would be sufficient to present it with a fraction of the current inputs, say everything up to 1970, and have it generate all of the input data from 1970 onwards as output.
If that does not work then the moment you introduce AI you cap their capabilities unless humans continue to create original works to feed the AI. The conclusion - to me, at least - is that these pieces of software regurgitate their inputs, they are effectively whitewashing plagiarism, or, alternatively, their ability to generate new content is capped by some arbitrary limit relative to the inputs.
fpoling
Pick up a programming book from the seventies or eighties that was unlikely to have been scanned and fed into an LLM. Take a task from it and ask the LLM to write a program that even a student could solve within 10 minutes. If the problem was never really published before, the LLM fails spectacularly.
_heimdall
I have a very anecdotal, but interesting, counterexample.
I recently asked Gemini 3 Pro to create an RSS feed reader type of experience by using XSLT to style and layout an OPML file. I specifically wanted it to use a server-side proxy for CORS, pass through caching headers in the proxy to leverage standard HTTP caching, and I needed all feed entries for any feed in the OPML to be combined into a single chronological feed.
It initially told me multiple times that it wasn't possible (it also reminded me that Google is getting rid of XSLT). Regardless, after I reiterated multiple times that it is possible, it finally decided to make a temporary POC. That POC worked on the first try, with only one follow-up to standardize date formatting with support for both Atom and RSS.
I obviously can't say the code was novel, though I would be a bit surprised if it trained on that task enough for it to remember roughly the full implementation and still claimed it was impossible.
martin-t
The whole "reproduces training data verbatim" thing is a red herring.
It reproduces _patterns from the training data_, sometimes including verbatim phrases.
The work (to discover those patterns, to figure out what works and what does not, to debug some obscure heisenbug and write a blog post about it, ...) was done by humans. Those humans should be compensated for their work, not owners of mega-corporations who found a loophole in copyright.
moron4hire
No, the thing needing proof is the novel idea: that LLMs can produce original code.
ekropotin
> If a human did this we probably would have a word for them.
What do you mean? The programmers work is literally combining the existing patterns into solutions for problems.
Mtinie
> If a human did this we probably would have a word for them.
I don’t think it’s fair to call someone who used Stack Overflow to find a similar answer with samples of code to copy to their project an asshole.
jacquesm
Who brought Stack Overflow up? Stack Overflow does not magically generate code, someone has to actually provide it first.
bluedino
It has been for the last 15 years.
sublinear
Using stack overflow recklessly is definitely asshole behavior.
Aeolun
Software engineer? You think I cite all the code I’ve ever seen before when I reproduce it? That I even remember where it comes from?
ineedasername
>we probably would have a word for them
Student? Good learner? Pretty much what everyone does can be boiled down to reading lots of other code that’s been written and adapting it to a use case. Sure, to some extent models are regurgitating memorized information, but for many tasks they’re regurgitating a learned method of doing something and backfilling the specifics as needed— the memorization has been generalized.
fooker
> If a human did this we probably would have a word for them.
Humans do this all the time.
FeepingCreature
This is not how LLMs work.
littlecranky67
> Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things
Because the rendered output (pixels, not HTML/CSS) is not fed in as training data. You will find tons of UI snippets and questions, but they rarely include screenshots. And if they do, they are not scraped.
Wowfunhappy
Interesting thought. I wonder if Anthropic et al could include some sort of render-html-to-screenshot as part of the training routine, such that the rendered output would get included as training data.
btown
Even better, a tool that can tell the rendered bounding box of any set of elements, and what the distances between pairs of elements are, so it can make adjustments if relative positioning doesn't match its expectation. This would be incredible for SVG generation for diagrams, too.
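A rough sketch of such a tool, assuming bounding boxes come in as (x, y, width, height) tuples (in practice they might come from something like Playwright's `bounding_box()`; the element names here are made up):

```python
import math
from itertools import combinations

def center(box):
    """Center point of a box given as (x, y, width, height)."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def pairwise_distances(boxes):
    """Distance between the centers of every pair of named boxes.

    `boxes` maps an element name to its rendered bounding box. An agent
    could compare these numbers against its expected layout and emit
    concrete pixel adjustments instead of eyeballing a screenshot.
    """
    report = {}
    for (a, box_a), (b, box_b) in combinations(boxes.items(), 2):
        (ax, ay), (bx, by) = center(box_a), center(box_b)
        report[(a, b)] = math.hypot(bx - ax, by - ay)
    return report

planets = {"jam": (300, 300, 40, 40), "tunes": (100, 300, 40, 40)}
print(pairwise_distances(planets))  # {('jam', 'tunes'): 200.0}
```

The same idea works for SVG: query the rendered geometry, feed the distances back as text, and let the model reason in numbers rather than pixels.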
KaiserPro
That's basically a VLM, but the problem is that describing the world requires a better understanding of the world. Hence why LeCun is talking about world models (it's also cutting edge for teaching robots to manipulate and plan manipulations).
ubercow13
Why wouldn't they be?
chongli
Why is this something a Wayland compositor (a glorified window manager) needs to worry about? Apple figured this out back in the 1990s with ColorSync and they did it once for the Mac OS and any application that wanted colour management could use the ColorSync APIs.
hedgehog
Color management infrastructure is intricate. To grossly simplify: somehow you need to connect together the profile and LUT for each display, upload the LUTs to the display controller, and provide appropriate profile data for each window to their respective processes. During compositing, convert any buffers that don't already match the output (unmanaged applications will probably be treated as sRGB; color-managed graphics apps will opt out of conversion and do whatever is correct for their purpose).
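To make the LUT step concrete, here is a small illustrative Python sketch (a real compositor does this in C against DRM/KMS APIs, and the hardware applies the table, not software). It builds a 16-bit gamma ramp like the VCGT discussed above and shows its effect on 8-bit values:

```python
def build_gamma_lut(gamma: float, size: int = 256):
    """Build a 1D gamma ramp with 16-bit entries, like a VCGT.

    Display controllers typically hold one such table per channel;
    the compositor's job is to upload it, not to apply it per pixel.
    """
    return [round(((i / (size - 1)) ** (1.0 / gamma)) * 65535)
            for i in range(size)]

def apply_lut(pixels, lut):
    """What the hardware effectively does to each 8-bit value."""
    return [lut[p] >> 8 for p in pixels]

identity = build_gamma_lut(1.0)
print(apply_lut([0, 128, 255], identity))  # [0, 128, 255]
```

With `gamma=2.2` instead of the identity ramp, mid-tones are lifted while 0 and 255 stay pinned, which is exactly the adjustment a VCGT encodes.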
chongli
Yes, but why is the compositor dealing with this? Shouldn't the compositor simply be deciding which windows go where (X, Y, and Z positions) and leave the rendering to another API? Why does every different take on a window manager need to re-do all this work?
smoghat
Ok, so here is an interesting case where Claude was almost good enough, but not quite. But I’ve been amusing myself by taking abandoned Mac OS programs from 20 years ago that I find on GitHub and bringing them up to date to work on Apple silicon. For example, jpegview, which was a very fast and simple slideshow viewer. It took about three iterations with Claude code before I had it working. Then it was time to fix some problems, add some features like playing videos, a new layout, and so on. I may be the only person in the world left who wants this app, but well, that was fine for a day long project that cooked in a window with some prompts from me while I did other stuff. I’ll probably tackle scantailor advanced next to clean up some terrible book scans. Again, I have real things to do with my time, but each of these mini projects just requires me to have a browser window open to a Claude code instance while I work on more attention demanding tasks.
skrebbel
> Ok, so here is an interesting case where Claude was almost good enough, but not quite.
You say that as if that’s uncommon.
jonplackett
This should be the strap line for all AI (so far)
smoghat
That's fair. But I always think of it as an intern I am paying $20 a month for or $200 a month. I would be kind of shocked if they could do everything as well as I'd hoped for that price point. It's fascinating for me and worth the money.
I am lucky that I don't depend on this for work at a corporation. I'd be pulling my hair out if some boss said "You are going to be doing 8 times as much work using our corporate AI from now on."
mr_windfrog
Maybe we could try asking Claude to generate code using <table>, <tr>, <td> for layout instead of relying on div + CSS. Feels like it could simplify things a lot.
Would this actually work, or am I missing something?
thecr0w
I think it probably gets you 80% but the last 20% of pixel perfection seems to evade Claude. But I'm pretty new to writing prompts so if you can nail it let me know and I'll link you in the post.
charcircuit
>I'd like to preserve this website forever and there's no other way to do it besides getting Claude to recreate it from a screenshot.
There are other ways, such as downloading an archive and preserving the file in one or more cloud storage services.
sqircles
> The Space Jam website is simple: a single HTML page, absolute positioning for every element...
Absolute positioning wasn't available until CSS2 in 1998. This is just a table with crafty use of align, valign, colspan, and rowspan.
thecr0w
Thanks, my friend. I added a strike through of the error, a correction, and credited you.
I'm keeping it in for now because people have made some good jokes about the mistake in the comments and I want to keep that context.
DocTomoe
Which would also render differently on every machine, based on browser settings, screen sizes, and available fonts.
Like the web was meant to be. An interpreted hypertext format, not a pixel-perfect brochure for marketing execs.
masswerk
Hum, table cells provide the max-width and images a min-width, heights are absolute (with table cells spilling over, as with CSS "overflow-y: visible"), and aligns and maybe HSPACE and VSPACE attributes do the rest. As long as image heights exceed the effective line-height and there's no visible text, this should render pixel perfect on any browser then in use. In this case, there's also an absolute width set for the entire table, adding further constraints. Table layouts can be elastic, with constraints or without, but this one should be pretty stable.
(Fun fact, the most amazing layout foot-guns back then: effective font sizes and line-heights are subject to platform and configuration (e.g., Win vs Mac); Netscape does paragraph spacing at 1.2em, IE at 1em (if this matters, prefer `<br>` over paragraphs); frame dimensions in Netscape are always calculated as integer percentages of window dimensions, even if you provide absolute dimensions in pixels, while IE does what it says on the tin (a rare example), so they will be the same only by chance and effective rounding errors. And, of course, screen gamma is different on Win and Mac, so your colors will always be messed up; aim for a happy medium.)
jeanlucas
>Like the web was meant to be.
what?
sigseg1v
Curious if you've tested something such as:
- "First, calculate the orbital radius. To do this accurately, measure the average diameter of each planet, p, and the average distance from the center of the image to the outer edge of the planets, x, and calculate the orbital radius r = x - p"
- "Next, write a unit test script that we will run that reads the rendered page and confirms that each planet is on the orbital radius. If a planet is not, output the difference you must shift it by to make the test pass. Use this feedback until all planets are perfectly aligned."
Aurornis
This is my experience with using LLMs for complex tasks: If you're lucky they'll figure it out from a simple description, but to get most things done the way you expect requires a lot of explicit direction, test creation, iteration, and tokens.
One of the keys to being productive with LLMs is learning how to recognize when it's going to take much more effort to babysit the LLM into getting the right result as opposed to simply doing the work yourself.
jazzyjackson
Re: tokens, there is a point where you have to decide what's worth it to you. I'd been unimpressed with what I could get out of chat apps but when I wanted to do a rails app that would cost me thousands in developer time and some weeks between communication zoom meetings and iteration... I bit the bullet and kept topping up Claude API and spent about $500 on Opus over the course of a weekend, but the site is done and works great.
thecr0w
Hm, I didn't try exactly this, but I probably should!
Wrt the unit test script, let's take Claude out of the equation: how would you design the unit test? I kept running into either Claude or some library not being capable of consistently identifying planet vs. non-planet, which was hindering Claude's ability to make decisions based on fine detail or "pixel coordinates", if that makes sense.
cfbradford
Do you give Claude the screenshot as a file? If so I’d just ask it to write a tool to diff each asset to every possible location in the source image to find the most likely position of each asset. You don’t really need recognition if you can brute force the search. As a human this is roughly what I would do if you told me I needed to recreate something like that with pixel perfect precision.
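That brute-force search is straightforward to sketch. Below is a toy pure-Python version over grayscale grids; a real screenshot would use the same sum-of-squared-differences search per channel, or OpenCV's `matchTemplate`, which implements exactly this, optimized:

```python
def best_match(image, asset):
    """Find the (row, col) where `asset` best matches `image`.

    Brute-force sum-of-squared-differences over every possible
    placement: no planet-recognition model needed, because we already
    have the exact asset and only need to locate it. `image` and
    `asset` are 2D lists of grayscale values.
    """
    ih, iw = len(image), len(image[0])
    ah, aw = len(asset), len(asset[0])
    best, best_pos = float("inf"), None
    for r in range(ih - ah + 1):
        for c in range(iw - aw + 1):
            ssd = sum(
                (image[r + i][c + j] - asset[i][j]) ** 2
                for i in range(ah) for j in range(aw)
            )
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

img = [[0] * 6 for _ in range(6)]
img[3][2], img[3][3] = 9, 9          # plant a 1x2 "asset" at row 3, col 2
print(best_match(img, [[9, 9]]))     # (3, 2)
```

Since the Space Jam assets are known files, locating each one this way sidesteps the "what is a planet" problem entirely.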
thecr0w
Ok! Will give it a shot. In a few iterations I gave him screenshots, I have given him the ability to take screenshots, and I gave him the Playwright MCP. I kind of gave up on the path you're suggesting (though I didn't get super far along) because I felt like I would eventually run into this problem of needing a model to figure out what a planet is, where the edge of the planet is, etc.
But if that could be done deterministically, I totally agree this is the way to go. I'll put some more time into it over the next couple weeks.
bluedino
Congratulations, we finally created 'plain English' programming languages. It only took 1/10th of the world's electricity and 40% of the semiconductor production.
turnsout
Yes, this is a key step when working with an agent—if they're able to check their work, they can iterate pretty quickly. If you're in the loop, something is wrong.
That said, I love this project. haha
monsieurbanana
I'm trying to understand why this comment got downvoted. My best guess is that "if you're in the loop, something is wrong" is interpreted as there should be no human involvement at all.
The loop here, imo, refers to the feedback loop. And it's true that ideally there should be no human involvement there. A tight feedback loop is as important for llms as it is for humans. The more automated you make it, the better.
turnsout
Yes, maybe I goofed on the phrasing. If you're in the feedback loop, something is wrong. Obviously a human should be "in the loop" in the sense that they're aware of and reviewing what the agent is doing.
manlymuppet
Couldn’t you just feed Claude all the raw, inspect element HTML from the website and have it “decrypt” that?
The entire website is fairly small so this seems feasible.
Usually there’s a big difference between a website’s final code and its source code because of post processing but that seems like a totally solvable Claude problem.
Sure LLMs aren’t great with images, but it’s not like the person who originally wrote the Space Jam website was meticulously messing around with positioning from a reference image to create a circular orbit — they just used the tools they had to create an acceptable result. Claude can do the same.
Perhaps the best method is to re-create, rather than replicate the design.
blks
What do you mean? Raw HTML is the original website's source code.
Modern web development has completely poisoned the younger generation.
manlymuppet
I'm using source code like it's used when referring to source code vs executables. React doesn't simply spit out HTML, nor the JSX used to write said React code, it outputs a mixture of things that's the optimized HTML/CSS/JS version of the React you wrote. This is akin to source code and the optimized binaries we actually use.
Perhaps the wrong usage of "source code". I probably should've been more precise. Forgive my lack of vocabulary to describe the difference I was referring to.
sailfast
There were no binaries or packages. You wrote the HTML in notepad or maybe you used some "high speed IDE" with syntax highlighting and some buttons like Dreamweaver and then uploaded it via FTP to whatever server you were hosting it on. No muss, no fuss. It was a glorious time and I miss that internet a lot.
pastel8739
For a website from 1996 though, there’s a very good chance that the page source is the source code
personjerry
If you have the raw HTML why would you need to do this at all?
manlymuppet
I should've been more precise with my words.
What I meant is doing inspect element on the Space Jam website, and doing select all + copy.
futuraperdita
I think you're assuming a pattern existed in 1996 that didn't actually exist until the 2010s.
In 1996 JavaScript was extremely limited; even server side processing was often limited to CGI scripts. There was nothing like React that was in common use at the time. The Space Jam website was almost certainly not dynamically compiled as HTML - it existed and was served as a static set of files.
Even a decade later, React and the frontend-framework sort of thinking wasn't really a big thing. People had started to make lots of things with "DHTML" in the early 2000s where JavaScript was used to make things spicier (pretty animations, some server side loading with AJAX) and still often worked without JS enabled in a pattern called graceful degradation.
What you'd get from "View Source", or "Inspect Element", and what was literally saved on disk of spacejam.com, was almost certainly the same content.
manlymuppet
https://pastebin.com/raw/F2jxZTeJ
The HTML I'm referring to, copied from the website.
Only about 7,000 characters or just 2,000 Claude tokens. This is feasible.
literalAardvark
The space jam website used HTML tables for formatting and split images in each cell.
CSS didn't exist.
999900000999
Space Jam website design as an LLM benchmark.
This article is a bit negative. Claude gets close; it just can't get the ordering right, which is something OP can manually fix.
I prefer GitHub Copilot because it's cheaper and integrates with GitHub directly. I'll have times where it'll get it right, and times when I have to try 3 or 4 times.
GeoAtreides
>which is something OP can manually fix
what if the LLM gets something wrong that the operator (a junior dev, perhaps) doesn't even recognize as wrong? That's the main issue: if it fails here, it will fail with other things, in not such obvious ways.
godelski
I think that's the main problem with them. It is hard to figure out when they're wrong.
As the post shows, you can't trust them when they think they solved something but you also can't trust them when they think they haven't[0]. The things are optimized for human preference, which ultimately results in this being optimized to hide mistakes. After all, we can't penalize mistakes in training when we don't know the mistakes are mistakes. The de facto bias is that we prefer mistakes that we don't know are mistakes than mistakes that we do[1].
Personally I think a well designed tool makes errors obvious. As a tool user that's what I want and makes tool use effective. But LLMs flip this on the head, making errors difficult to detect. Which is incredibly problematic.
[0] I frequently see this in a thing it thinks is a problem but actually isn't, which makes steering more difficult.
[1] Yes, conceptually unknown unknowns are worse. But you can't measure unknown unknowns, they are indistinguishable from knowns. So you always optimize deception (along with other things) when you don't have clear objective truths (most situations).
alickz
>what if the LLM gets something wrong that the operator (a junior dev perhaps) doesn't even know it's wrong?
the same thing that always happens if a dev gets something wrong without even knowing it's wrong - either code review/QA catches it, or the user does, and a ticket is created
>if it fails here, it will fail with other things, in not such obvious ways.
is infallibility a realistic expectation of a software tool or its operator?
smallnix
That's not the point of the article. It's about Claude/LLMs being overconfident about recreating the page pixel-perfectly.
jacquesm
All AIs are overconfident. It's impressive what they can do, but it is at the same time extremely unimpressive what they can't do while passing it off as the best thing since sliced bread. 'Perfect! Now I see the problem.' 'Thank you for correcting that, here is a perfect recreation of problem x that will work with your hardware.' (Never mind the 10 glaring mistakes.)
I've tried these tools a number of times and spent a good bit of effort on learning to maximize the return. By the time you know what prompt to write you've solved the problem yourself.
thecr0w
Ya, this is true. Another commenter also pointed out that my intention was to one-shot it. I didn't really go too deeply into trying multiple iterations.
This is also fairly contrived, you know? It's not a realistic limitation to rebuild HTML from a screenshot because of course if I have the website loaded I can just download the HTML.
swatcoder
> rebuild HTML from a screenshot
???
This is precisely the workflow when a traditional graphic designer mocks up a web/app design, which still happens all the time.
They sketch a design in something like Photoshop or Illustrator, because they're fluent in these tools and many have been using them for decades, and somebody else is tasked with figuring out how to slice and encode that design in the target interactive tech (HTML+CSS, SwiftUI, QT, etc).
Large companies, design agencies, and consultancies with tech-first design teams have a different workflow, because they intentionally staff graphic designers with a tighter specialization/preparedness, but that's a much smaller share of the web and software development space than you may think.
There's nothing contrived at all about this test and it's a really great demonstration of how tools like Claude don't take naturally to this important task yet.
thecr0w
You know, you're totally right and I didn't even think about that.
Retric
It’s not unrealistic to want to revert to an early version of something you only have a screenshot of.
bigstrat2003
> it just can't get the order right which is something OP can manually fix.
If the tool needs you to check up on it and fix its work, it's a bad tool.
markbao
“Bad” seems extreme. The only way to pass the litmus test you’ve described is for a tool to be 100% perfect, so then the graph looks like 99.99% “bad tool” until it reaches 100% perfection.
It’s not that binary imo. It can still be extremely useful and save a ton of time if it does 90% of the work and you fix the last 10%. Hardly a bad tool.
It’s only a bad tool if you spent more time fixing the results than building it yourself, which sometimes used to be the case for LLMs but is happening less and less as they get more capable.
a4isms
If you show me a tool that does a thing perfectly 99% of the time, I will stop checking it eventually. Now let me ask you: How do you feel about the people who manage the security for your bank using that tool? And eventually overlooking a security exploit?
I agree that there are domains for which 90% good is very, very useful. But 99% isn't always better. In some limited domains, it's actually worse.
godelski
I wouldn't go that far, but I do believe good tool design tries to make its failure modes obvious. I like to think of it similar to encryption: hard to do, easy to verify.
All tools have failure modes and truthfully you always have to check the tool's work (which is your work). But being a master craftsman is knowing all the nuances behind your tools, where they work, and more importantly where they don't work.
That said, I think that also highlights the issue with LLMs and most AI. Their failure modes are inconsistent and difficult to verify. Even with agents and unit tests you still have to verify and it isn't easy. Most software bugs are created from subtle things, often which compound. Which both those things are the greatest weaknesses of LLMs: nuance and compounding effects.
So I still think they aren't great tools, but I do think they can be useful. But that also doesn't mean it isn't common for people to use them well outside the bounds of where they are generally useful. It'll be fine a lot of times, but the problem is that it is like an alcohol fire[0]; you don't know what's on fire because it is invisible. Which, after all, isn't that the hardest part of programming? Figuring out where the fire is?
mrweasel
That's my thinking. If I need to check up on the work, then I'm equally capable of writing the code myself. It might go faster with an LLM assisting me, and that feels perfectly fine. My issue is when people use the AI tools to generate something far beyond their own capabilities. In those cases, who checks the result?
wvenable
Perfection is the enemy of good.
Wowfunhappy
Claude is not very good at using screenshots. The model may technically be multi-modal, but its strength is clearly in reading text. I'm not surprised it failed here.
fnordpiglet
Especially since it decomposes the image into a semantic vector space rather than the actual grid of pixels. Once the image is transformed into patch embeddings, all sense of pixels is entirely destroyed. The author demonstrates a profound lack of understanding of how multimodal LLMs function that a simple query of one would elucidate immediately.
The right way to handle this is not to feed it grids and whatnot, which all get blown away by the embedding encoding, but to instruct it to build image-processing tools of its own and to mandate their use in constructing the required coordinates and computing the eccentricity of the pattern etc. in code and language space. Doing it this way, you can even get it to write assertive tests comparing the original layout to the final one along various image-processing metrics. This would assuredly work better, take far less time, be more stable on iteration, and fits neatly into how a multimodal agentic programming tool actually functions.
mcbuilder
Yeah, this is exactly what I was thinking. LLMs don't have precise geometrical reasoning from images. Having an intuition of how the models work is actually a defining skill in "prompt engineering".
thecr0w
Yeah, still trying to build my intuition. Experiments/investigations like this help me. Any other blogs or experiments you'd suggest?
thecr0w
Great, thanks for that suggestion!
dcanelhas
Even with text, parsing content in 2D seems to be a challenge for every LLM I have interacted with. Try getting a chatbot to make an ascii-art circle with a specific radius and you'll see what I mean.
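For contrast, the deterministic version of that challenge is a few lines. A sketch (radius in character cells, with a rough 2:1 aspect correction since terminal characters are about twice as tall as wide) of what the chatbot is being asked to emulate token by token:

```python
def ascii_circle(radius: int) -> str:
    """Draw a circle of the given radius as ASCII art.

    A cell is part of the ring when its distance from the center is
    within half a cell of `radius`; the x coordinate is halved to
    compensate for the character aspect ratio.
    """
    rows = []
    for y in range(-radius, radius + 1):
        row = ""
        for x in range(-2 * radius, 2 * radius + 1):
            d = ((x / 2) ** 2 + y ** 2) ** 0.5
            row += "*" if abs(d - radius) < 0.5 else " "
        rows.append(row.rstrip())
    return "\n".join(rows)

print(ascii_circle(4))
```

An LLM producing this character-by-character has to track 2D positions through a 1D token stream, which is exactly where they fall over.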
Wowfunhappy
I don't really consider ASCII art to be text. It requires a completely different type of reasoning. A blind person can understand text if it's read out loud. A blind person really can't understand ASCII art if it's read out loud.
soared
I got quite close with Gemini 3 Pro in AI Studio. I uploaded a screenshot (no assets) and the results were similar to OP's. It failed to follow my fix initially, but I told it to follow my directions (lol) and it came quite close (though portrait mode distorted it; landscape was close to perfect).
“Reference the original uploaded image. Between each image in the clock face, create lines to each other image. Measure each line. Now follow that same process on the app we’ve created, and adjust the locations of each image until all measurements align exactly.”
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
buchwald
Claude is surprisingly bad at visual understanding. I did a similar thing to OP where I wanted Claude to visually iterate on Storybook components. I found outsourcing the visual check to Playwright in vision mode (as opposed to using the default a11y tree) and Codex for understanding worked best. But overall the idea of a visual inspection loop went nowhere. I blogged about it here: https://solbach.xyz/ai-agent-accessibility-browser-use/
MagMueller
Interesting read. Agree that GUI is super hard for agents. Did you see "skills" from browser-use? We directly interact with network requests now.
wilsmex
Well, this was interesting. As someone who was actually building similar websites in the late '90s, I threw this into Opus 4.5. Note the original author is wrong about the original site, however:
"The Space Jam website is simple: a single HTML page, absolute positioning for every element, and a tiling starfield GIF background.".
This is not true, the site is built using tables, not positioning at all, CSS wasn't a thing back then...
Here was its one-shot attempt at building the same type of layout (table based) with a screenshot and assets as input: https://i.imgur.com/fhdOLwP.png