Kimi Linear: An Expressive, Efficient Attention Architecture
github.com
Ground stop at JFK due to staffing
fly.faa.gov
Phone numbers for use in TV shows, films and creative works
acma.gov.au
The ear does not do a Fourier transform (2024)
dissonances.blog
Jack Kerouac, Malcolm Cowley, and the difficult birth of On the Road
theamericanscholar.org
ICE and the Smartphone Panopticon
newyorker.com
NPM flooded with malicious packages downloaded more than 86k times
arstechnica.com
Springs and bounces in native CSS
joshwcomeau.com
The Psychology of Portnoy: On the Making of Philip Roth's Groundbreaking Novel
lithub.com
Free software scares normal people
danieldelaney.net
Israel demanded Google and Amazon use secret 'wink' to sidestep legal orders
theguardian.com
Show HN: I made a heatmap diff viewer for code reviews
0github.com
Minecraft HDL, an HDL for Redstone
github.com
Denmark reportedly withdraws Chat Control proposal following controversy
therecord.media
Show HN: Status of my favorite bike share stations
blog.alexboden.ca
Show HN: Quibbler – A critic for your coding agent that learns what you want
github.com
Roadmap for Improving the Type Checker
forums.swift.org
Launch HN: Propolis (YC X25) – Browser agents that QA your web app autonomously
app.propolis.tech
PlanetScale Offering $5 Databases
planetscale.com
A change of address led to our Wise accounts being shut down
shaun.nz
Hi HN! Creator here. I built Story Keeper to solve a problem I kept hitting with AI agents: they remember everything but lose coherence over long conversations.

The Core Idea

Instead of storing chat history and retrieving chunks (the RAG approach), Story Keeper maintains a living narrative:
- Characters: who you are (evolving), and who the agent is
- Arc: where you started → where you're going
- Themes: what matters to you
- Context: the thread connecting everything
Think of it as the difference between reading meeting notes and being in the relationship.

Technical Approach

~200 lines of Python, built on three primitives:
- Story State (not a message list)
- Story Evolution (not appending)
- Story-Grounded Response (not retrieval)
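To make the three primitives concrete, here is a minimal sketch in plain Python. All names and logic are hypothetical illustrations of the idea, not the actual pact_ax API; a real Story Evolution step would use an LLM call rather than string rules.

    # Hypothetical sketch of the three primitives, not the pact_ax API.
    from dataclasses import dataclass, field

    @dataclass
    class StoryState:
        # Story State: a narrative, not a transcript.
        characters: dict = field(default_factory=dict)  # who you are, who the agent is
        arc: str = ""                                   # where you started -> where you're going
        themes: list = field(default_factory=list)      # what matters to you
        context: str = ""                               # the thread connecting everything

    def evolve(state: StoryState, user_message: str) -> StoryState:
        # Story Evolution: fold the new turn into the narrative
        # instead of appending it to a message list.
        state.context = f"Latest: {user_message}"
        if "perfection" in user_message.lower():
            state.themes.append("perfectionism")
        return state

    def grounded_prompt(state: StoryState, user_message: str) -> str:
        # Story-Grounded Response: the LLM sees the narrative,
        # not retrieved chunks of history.
        return (
            f"Arc: {state.arc}\n"
            f"Themes: {', '.join(state.themes)}\n"
            f"Context: {state.context}\n"
            f"User: {user_message}"
        )

    state = StoryState(arc="burnout -> sustainable habits")
    state = evolve(state, "I keep chasing perfection and burning out")
    print(grounded_prompt(state, "What should I focus on this week?"))

The point of the sketch: the state is rewritten each turn and stays small, so the prompt carries understanding rather than an ever-growing transcript.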
Works with any LLM - tested with GPT-4, Claude, Llama 3.1, and Mistral.

Why This Works

Traditional memory is about facts; Story Keeper is about continuity. Example: a health-coaching agent.
- Normal: generic advice each time
- Story Keeper: "This is the pattern we identified last month. You do better with 'good enough' than perfect."
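The contrast above can be sketched as two prompt builders (hypothetical helpers and dummy data, not the pact_ax API): a retrieval-style prompt replays stored snippets, while a story-grounded prompt restates the relationship-level pattern.

    # Hypothetical contrast between RAG-style and story-grounded prompting.
    def retrieval_prompt(chunks, question):
        # RAG-style: paste back the most relevant saved snippets.
        return "Relevant notes:\n" + "\n".join(chunks) + f"\n\nUser: {question}"

    def story_prompt(pattern, question):
        # Story-grounded: carry the identified pattern forward.
        return (
            f"Known pattern: {pattern}\n"
            "Respond as a coach who already knows this history.\n"
            f"User: {question}"
        )

    chunks = ["skipped workout after missing one day", "all-or-nothing dieting"]
    pattern = "does better with 'good enough' than with perfect"

    print(retrieval_prompt(chunks, "Why do I keep falling off?"))
    print(story_prompt(pattern, "Why do I keep falling off?"))

Both prompts fit in a few lines, but only the second tells the model what the facts mean, which is the continuity the contrast above is pointing at.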
The agent carries forward understanding, not just data.

Implementation

Part of PACT-AX (an open-source agent collaboration framework). MIT licensed. Simple integration (Python):

    from pact_ax.primitives.story_keeper import StoryKeeper

    keeper = StoryKeeper(agent_id="my-agent")
    response = keeper.process_turn(user_message)

Use Cases I'm Exploring
- Long-term coaching/mentorship
- Multi-session research assistants
- Customer support with relationship continuity
- Educational tutors that understand learning journeys
What I'd Love Feedback On
- Is this solving a real problem, or am I overthinking it?
- Performance concerns at scale?
- Other approaches people have tried for this?
- Use cases I'm missing?
The full technical writeup is in the repo's blog folder. Happy to answer questions!