Claude can now search the web
anthropic.com
Retro Boy: simple Game Boy emulator written in Rust, can be played on the web
github.com
McLaren Invented New Carbon Fiber Tape to Build Even More Complex Parts
thedrive.com
Zero-knowledge proofs, encoding Sudoku and Mario speedruns without semantic leak
vasekrozhon.wordpress.com
Next generation LEDs are cheap and sustainable
liu.se
Build a Container Image from Scratch
danishpraka.sh
The Last Drops of Mexico City
mexicocitywater.longlead.com
Oxygen discovered in most distant known galaxy
eso.org
The Pain That Is GitHub Actions
feldera.com
Debugging PostgreSQL More Easily
cybertec-postgresql.com
Powers of 2 with all even digits
oeis.org
Minding the gaps: A new way to draw separators in CSS
blogs.windows.com
Going from an Idea to MVP in Weeks: PromptPanda's Launch(es)
docs.opensaas.sh
Nonprofit's Leader Convicted of Siphoning Off $240M in Federal Food Aid
nytimes.com
Grease: An Open-Source Tool for Uncovering Hidden Vulnerabilities in Binary Code
galois.com
Understanding Solar Energy
construction-physics.com
Show HN: Minimalytics – a standalone minimal analytics app built on SQLite
github.com
Hexagons and Beyond: Flexible, Responsive Grid Patterns, Sans Media Queries
css-tricks.com
Pump.co (YC S22) Is Hiring
ycombinator.com
I think the key takeaway quotes are these:
“The amount of inference compute needed is already 100x more” than it was when large language models started out, Huang said on last month’s earnings call. “And that’s just the beginning.”
The cost of serving up responses from LLMs has fallen rapidly over the past two years, driven by a combination of more powerful chips, more efficient AI systems and intense competition between AI developers such as Google, OpenAI and Anthropic.