Hi HN,
I’ve been working on Wan-Animate, a tool that brings static characters to life through motion transfer and holistic replication.
Key features include:

- Animate static characters by transferring movements and expressions from a reference video
- Seamless character replacement with consistent gestures, expressions, and style
- Video generation up to 120 seconds in 480p or 720p
- Accurate lip–audio alignment and realistic expression transfer
- Multimodal instruction control using video, image, and text prompts
The goal is to make character animation feel natural and adaptable in open-ended scenarios, not just within a limited set of templates.
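The post doesn't show Wan-Animate's actual interface, so purely as a sketch of how the features above might map onto a programmatic request, here is a minimal Python illustration. Everything in it — `AnimateRequest`, the field names, and the validation beyond the stated 120-second / 480p–720p limits — is hypothetical, not the tool's real API.

```python
from dataclasses import dataclass
from typing import Optional

# Limits taken from the post; everything else below is illustrative.
MAX_DURATION_S = 120
RESOLUTIONS = {"480p", "720p"}

@dataclass
class AnimateRequest:
    """Hypothetical request object for a motion-transfer job."""
    character_image: str              # static character to bring to life
    reference_video: str              # driving video (motion + expressions)
    resolution: str = "720p"
    duration_s: int = MAX_DURATION_S
    audio: Optional[str] = None       # optional track for lip-audio alignment
    text_prompt: Optional[str] = None # optional multimodal text instruction

    def validate(self) -> None:
        if self.resolution not in RESOLUTIONS:
            raise ValueError(f"resolution must be one of {sorted(RESOLUTIONS)}")
        if not 0 < self.duration_s <= MAX_DURATION_S:
            raise ValueError(f"duration must be 1..{MAX_DURATION_S} seconds")

req = AnimateRequest("hero.png", "dance.mp4", resolution="480p", duration_s=90)
req.validate()  # within the stated limits, so this passes
```

The point of the sketch is just that the feature list decomposes cleanly into a single job description: one character source, one driving video, optional audio and text conditioning, and hard caps on length and resolution.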
I’d love feedback from the HN community on:

- Features you’d like in an AI-powered animation tool
- Use cases where motion transfer could be most impactful (games, avatars, education, etc.)
- Thoughts on scalability, ethics, and creative applications
Thanks for reading — looking forward to your thoughts!