Kernel-Free Video Deblurring via Synthesis

  Feitong Tan,    Shuaicheng Liu,    Liaoyuan Zeng,    Bing Zeng

School of Electronic Engineering
University of Electronic Science and Technology of China

Abstract:

Shaky cameras often capture videos with motion blur, especially when light is insufficient (e.g., a dimly-lit indoor environment or outdoors on a cloudy day). In this paper, we present a framework that restores blurry frames effectively by synthesizing details from sharp frames. The uniqueness of our approach is that we do not require blur kernels, which previous methods need either for deconvolution or for convolving with sharp frames before patch matching. We develop this kernel-free method mainly because accurate kernel estimation is challenging in the presence of noise, depth variations, and dynamic objects. Our method compares a blurry patch directly against sharp candidates, and the nearest-neighbor matches can be recovered with sufficient accuracy for deblurring. Moreover, to restore one blurry frame, instead of searching over a number of nearby sharp frames, we search only a single synthesized sharp frame, merged from regions of different sharp frames via an MRF-based region selection. Our experiments show that this method achieves quality competitive with state-of-the-art approaches, with improved efficiency and robustness.
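For illustration, the core kernel-free idea (comparing a blurry patch directly against sharp candidate patches, with no kernel estimation or pre-blurring step) can be sketched as a local SSD nearest-neighbor search. This is a minimal sketch, not the paper's implementation; the function name, the SSD metric, and the local search window are assumptions for the example.

```python
import numpy as np

def best_sharp_patch(blurry_patch, sharp_frame, center, search_radius=4):
    """Direct blur-to-sharp matching sketch: return the top-left position
    and SSD cost of the sharp patch that best matches `blurry_patch`,
    searching a small window around `center` (row, col) in `sharp_frame`.
    No blur kernel is estimated or applied before the comparison."""
    p = blurry_patch.shape[0]                  # patch size (assumed square)
    rows, cols = sharp_frame.shape
    r0, c0 = center
    best_ssd, best_pos = np.inf, None
    # Exhaustive scan of candidate top-left corners in the search window.
    for r in range(max(0, r0 - search_radius), min(rows - p, r0 + search_radius) + 1):
        for c in range(max(0, c0 - search_radius), min(cols - p, c0 + search_radius) + 1):
            candidate = sharp_frame[r:r + p, c:c + p]
            ssd = np.sum((candidate - blurry_patch) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos, best_ssd

# Toy usage: an identical patch should be found exactly where it was cut out.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
patch = sharp[10:15, 12:17].copy()
pos, ssd = best_sharp_patch(patch, sharp, center=(10, 12), search_radius=3)
```

In the paper's pipeline the search target would be the synthesized sharp frame rather than an arbitrary nearby frame, which keeps the candidate set small.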


Paper: [PDF]

Demo Video: [mp4 (97 MB)]



Downloads: results.zip (input and output videos, 1280x720, 47 MB)