Show HN: Refine – A Local Alternative to Grammarly
refine.sh
How I build software quickly
evanhahn.com
Let's Learn x86-64 Assembly (2020)
gpfault.net
Show HN: Ten years of running every day, visualized
nodaysoff.run
Apple's Browser Engine Ban Persists, Even Under the DMA
open-web-advocacy.org
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
arxiv.org
A Century of Quantum Mechanics
home.cern
OpenCut: The open-source CapCut alternative
github.com
Binding Application in Idris
andrevidela.com
The underground cathedral protecting Tokyo from floods (2018)
bbc.com
APKLab: Android Reverse-Engineering Workbench for VS Code
github.com
A technical look at Iran's internet shutdowns
zola.ink
Show HN: Built a desktop app to organize photos locally with duplicate detection
organizer.flipfocus.nl
Telefónica DE shifts VMware support to Spinnaker due to cost
theregister.com
Hypercapitalism and the AI talent wars
blog.johnluttig.com
Show HN: FFmpeg in plain English – LLM-assisted FFmpeg in the browser
vidmix.app
Show HN: ArchGW – An intelligent edge and service proxy for agents
github.com
Burning a Magnesium NeXT Cube (1993)
simson.net
Concurrent Programming with Harmony
harmony.cs.cornell.edu
Myanmar’s proliferating scam centers
asia.nikkei.com
The Scourge of Arial (2001)
marksimonson.com
The upcoming GPT-3 moment for RL
mechanize.work
GLP-1s are breaking life insurance
glp1digest.com
I can see Gary's point here. He got some stick for this on X, but he seems to be right. One curious thing is how both sides have vacated their original positions: the scaling camp was all about scaling, and while they now talk about RL as the next scaling curve, calling out to code tools is an accepted part of the paradigm. The symbolic camp put symbols first and learning later (Marcus himself in 2001: structured representations are the only route to systematicity).
Code Interpreter + o3 is neurosymbolic AI. The architecture looks a lot like a cognitive-science flowchart (perception net -> symbolic scratchpad -> controller loop); the difference is that we got there through gradient descent rather than brittle, expert-written rules.
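A minimal sketch of that loop, just to make the flowchart concrete. Everything here is hypothetical: propose_step() stands in for the neural model that emits a small program, and plain exec() stands in for a sandboxed symbolic interpreter.

    # perception net -> symbolic scratchpad -> controller loop, as a toy Python loop
    from dataclasses import dataclass, field

    @dataclass
    class Scratchpad:
        """Symbolic working memory: proposed snippets and their evaluated results."""
        steps: list = field(default_factory=list)

        def record(self, code: str, result):
            self.steps.append((code, result))

    def run_symbolic(code: str, env: dict):
        """Execute a proposed snippet in a throwaway namespace (the 'symbolic' half)."""
        local = dict(env)
        exec(code, {}, local)              # stand-in for a sandboxed interpreter
        return local.get("answer")

    def controller_loop(task: str, propose_step, max_iters: int = 5):
        """Alternate between the neural proposer and the symbolic executor."""
        pad = Scratchpad()
        for _ in range(max_iters):
            code = propose_step(task, pad)  # neural: generate the next snippet
            result = run_symbolic(code, {}) # symbolic: run it exactly
            pad.record(code, result)
            if result is not None:          # controller: stop once we have an answer
                return result, pad
        return None, pad

    if __name__ == "__main__":
        # Toy usage: a hard-coded "proposer" standing in for the model.
        def fake_proposer(task, pad):
            return "answer = sum(range(1, 101))"

        result, pad = controller_loop("add the integers 1 through 100", fake_proposer)
        print(result)  # 5050

The learned part is confined to propose_step; everything the interpreter does is exact and auditable, which is the whole point of the hybrid.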