Estimating Camera Motion from a Single Motion-Blurred Image
19 comments
March 28, 2025

porker
Going to read this in depth. I have been on and off trying to find a way to distinguish between in focus, motion blur (camera shake), and out of focus. Motion blur is surprisingly tricky for a computer to distinguish from the others. I can glance at a 2D FFT plot and tell, but the computer can't yet.
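A toy NumPy sketch of that 2D-FFT intuition (illustrative only: synthetic kernels and a hand-rolled anisotropy score, not a production detector). Motion blur acts like a line PSF, which suppresses the spectrum along one orientation; defocus acts like a disk PSF, which is radially symmetric. So binning spectral energy by angle separates the two:

```python
import numpy as np

def motion_kernel(length=15, size=21):
    # horizontal line PSF, roughly what linear camera shake looks like
    k = np.zeros((size, size))
    k[size // 2, (size - length) // 2:(size + length) // 2] = 1.0
    return k / k.sum()

def defocus_kernel(radius=7, size=21):
    # disk PSF, roughly what being out of focus looks like
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def blur(img, k):
    # circular convolution via FFT keeps the example dependency-free
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))

def angular_anisotropy(img, n_bins=36):
    # bin the log-magnitude spectrum by angle; a directional (motion)
    # blur makes the angular energy distribution lumpy, a disk does not
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    h, w = spec.shape
    y, x = np.mgrid[:h, :w]
    theta = np.arctan2(y - h / 2, x - w / 2) % np.pi  # fold symmetric halves
    bins = (theta / np.pi * n_bins).astype(int) % n_bins
    energy = np.bincount(bins.ravel(), weights=spec.ravel(), minlength=n_bins)
    energy = energy / energy.sum()
    return energy.std() / energy.mean()  # higher = more directional

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
a_motion = angular_anisotropy(blur(sharp, motion_kernel()))
a_defocus = angular_anisotropy(blur(sharp, defocus_kernel()))
# motion blur should score noticeably higher than defocus
```

On real photos the sharp image's own spectrum is not flat, so this raw score needs calibration, but the ordering tends to hold.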
fp64
Blind deconvolution has been around for a while, and from the estimated PSF you can gather the shake pattern (or tell whether it's just overall out of focus). It never worked well enough in practice, but I remember some impressive results from a decade ago already.
porker
Thank you, blind deconvolution and PSF sound like what I have been scratching around the edges of in my experiments, without knowing the right terms to search for to discover prior work. I shall dig into the literature!
anovikov
Can't upvote this enough. Finally a piece that's not about LLMs or how they are going to ruin the world.
fortran77
And not about Rust, either!
damnitbuilds
And no cod psychology about how to live a better life, either!
dylan604
or worse, make the world a better place
InDubioProRubio
Three days after posting, it gets inverted and reused as a directional camera motion blur shader ever after.
DrNosferatu
Doesn’t the Point Spread Function already achieve this?
drsopp
Isn't a logical next step to extract the depth field? Possible?
tetris11
Isn't the depth decoder part of the processing already?
drsopp
I should have read the abstract.
skywal_l
Right there in the abstract:
"Our approach works by predicting a dense motion flow field and a monocular depth map directly from a single motion-blurred image"
wiz21c
I'm not familiar with all of this, but is there a tool to remove the blur, then?
atoav
Deconvolution can be used for this; also see this curated list of resources: https://github.com/CVHW/Deblurring
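The core idea in a toy non-blind NumPy sketch: if you know (or have estimated) the PSF, a Wiener filter inverts the blur in the frequency domain. Blind methods would have to estimate `psf` first; the kernel and regularization constant `k` here are made up for illustration:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    # classic Wiener filter: F = conj(H) / (|H|^2 + k) * G, where
    # k regularizes frequencies the blur nearly wiped out
    H = np.fft.fft2(psf, blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# synthetic test: blur a random image with a 9-pixel horizontal "shake" PSF
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[0, :9] = 1 / 9
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

With a noisy photo and an imperfectly estimated PSF the result degrades fast, which is why this stayed a research topic for so long.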
xattt
That’s for the next grad student to solve.
esafak
Yes, those have existed for decades, with various degrees of success.
Imustaskforhelp
Can this be used in LLMs like Gemini, which don't really have a notion of motion but just take things frame by frame, and so can't really understand motion between frames?
Visual effects (VFX) has been doing this type of work for decades. The guy I know who did this best, Eugene Vendrovsky, ended up at Nvidia as a robotics computer vision division director. Back in the early 2000s he had all kinds of camera motion recovery algorithms we used in the Tracking department on feature films like the live-action Scooby Doo, Riddick, and the Narnia films. Eugene retired recently, after a very productive career.