AirPods liberated from Apple's ecosystem
github.com
IDEmacs: A Visual Studio Code clone for Emacs
codeberg.org
Our investigation into the suspicious pressure on Archive.today
adguard-dns.io
Blocking LLM crawlers without JavaScript
owl.is
libwifi: an 802.11 frame parsing and generation library written in C
libwifi.so
When did people favor composition over inheritance?
sicpers.info
Things that aren't doing the thing
strangestloop.io
The inconceivable types of Rust: How to make self-borrows safe (2024)
blog.polybdenum.com
When UPS charged me a $684 tariff on $355 of vintage computer parts
oldvcr.blogspot.com
Boa: A standard-conforming embeddable JavaScript engine written in Rust
github.com
Transgenerational Epigenetic Inheritance: the story of learned avoidance
elifesciences.org
Computing Across America (1983-1985)
microship.com
Show HN: Unflip – a puzzle game about XOR patterns of squares
unflipgame.com
EyesOff: How I built a screen contact detection model
ym2132.github.io
Archimedes – A Python toolkit for hardware engineering
pinetreelabs.github.io
Linux on the Fujitsu Lifebook U729
borretti.me
JVM exceptions are weird: a decompiler perspective
purplesyringa.moe
Why export templates would be useful in C++ (2010)
warp.povusers.org
I made a better DOM morphing algorithm
joel.drapper.me
Report: Tim Cook could step down as Apple CEO 'as soon as next year'
9to5mac.com
TCP, the workhorse of the internet
cefboud.com
Nevada Governor's office covered up Boring Co safety violations
fortune.com
Weighting an average to minimize variance
johndcook.com
So much text and not a single example, diagram, or demo.
I'm honestly skeptical this will work at all; the FOV of most webcams is so narrow that it can barely capture the shoulder of someone sitting beside me, let alone their eyes.
Then what you're basically looking for is calibration from the eye position/angle to the screen rectangle: you want to shoot a ray from each eye and see whether it intersects the laptop's screen.
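The ray-casting step described above is just ray/rectangle intersection. A minimal sketch, assuming you already have an estimated 3D eye position and gaze direction in camera coordinates (all names and the parametrization of the screen as an origin corner plus two edge vectors are illustrative, not from the project):

```python
import numpy as np

def gaze_hits_screen(eye_pos, gaze_dir, screen_origin, screen_u, screen_v):
    """Return True if the gaze ray intersects the screen rectangle.

    eye_pos:       3D eye position (meters, camera coordinates)
    gaze_dir:      gaze direction vector
    screen_origin: 3D position of one screen corner
    screen_u/v:    edge vectors spanning the screen rectangle
    In practice all of these come from (noisy) calibration.
    """
    normal = np.cross(screen_u, screen_v)
    denom = np.dot(normal, gaze_dir)
    if abs(denom) < 1e-9:              # gaze parallel to the screen plane
        return False
    t = np.dot(normal, screen_origin - eye_pos) / denom
    if t <= 0:                         # screen is behind the viewer
        return False
    hit = eye_pos + t * gaze_dir - screen_origin
    # Express the hit point in the rectangle's own (u, v) coordinates.
    u = np.dot(hit, screen_u) / np.dot(screen_u, screen_u)
    v = np.dot(hit, screen_v) / np.dot(screen_v, screen_v)
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0
```

The geometry is the easy part; the hard part is that `eye_pos` and especially `gaze_dir` carry large errors from a low-resolution webcam, so the intersection test is only as good as those estimates.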
This is challenging because most webcams are fairly low resolution, so each eyeball will only span roughly 20 px. From those ~20 px you need to estimate the eyeball-to-screen ray, and of course this varies with the screen size.
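The "~20 px per eyeball" figure is easy to sanity-check with napkin math. A sketch with illustrative numbers (720p sensor, ~70° horizontal FOV, onlooker at ~1 m; none of these come from the project):

```python
import math

h_res = 1280            # horizontal pixels (720p webcam, assumed)
fov_deg = 70            # horizontal field of view (assumed)
dist_m = 1.0            # onlooker distance from the camera (assumed)
eye_width_m = 0.024     # human eyeball is ~24 mm across

# Width of the scene the camera covers at that distance.
scene_width_m = 2 * dist_m * math.tan(math.radians(fov_deg / 2))
px_per_m = h_res / scene_width_m
eye_px = eye_width_m * px_per_m
print(round(eye_px))    # on the order of 20 px
```

Halve the resolution or double the distance and you are down to ~10 px, which is why estimating a gaze ray from the iris alone is so fragile.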
TLDR: Decent idea, but some napkin math and/or a quick bounds check should have come first. Maybe a $5 privacy screen is better.
Here's an idea:
Maybe start by training a gaze tracker for the primary user and seeing how accurate you can get with modeling plus calibration. Once you've solved that problem, use it as the upper bound on expected performance, and transform the problem into detecting the gaze of people nearby instead of the primary user.