Video Stabilization with a Depth Camera

   Shuaicheng Liu1   Yinting Wang1,2   Lu Yuan3    Jiajun Bu2      Ping Tan1    Jian Sun3

1. National University of Singapore       2. Zhejiang University        3. Microsoft Research


Previous video stabilization methods often employ homographies to model transitions between consecutive frames, or require robust long feature tracks. However, the homography model is invalid for scenes with significant depth variations, and feature point tracking is fragile in videos with textureless objects, severe occlusion, or camera rotation. To address these challenging cases, we propose to solve video stabilization with an additional depth sensor such as the Kinect camera. Though the depth image is noisy, incomplete, and of low resolution, it facilitates both camera motion estimation and frame warping, which makes video stabilization a much better posed problem. The experiments demonstrate the effectiveness of our algorithm.
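To illustrate how a depth image aids camera motion estimation, here is a minimal sketch: pixels with depth are back-projected to 3D, and the rigid camera motion between two frames is solved in closed form (the Kabsch/Procrustes algorithm). The intrinsics, the NaN-free input assumption, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Assumed Kinect-style intrinsics (illustrative values, not from the paper).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """Lift pixel coordinates (u, v) with depth z to 3D camera coordinates."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1)

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ P @ R.T + t for paired 3D points (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With correspondences between consecutive depth frames, `rigid_transform` recovers the inter-frame camera rotation and translation even where 2D feature tracking would fail.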


Paper [PDF]

Related Projects

Shuaicheng Liu, Lu Yuan, Ping Tan, Jian Sun. Bundled Camera Paths for Video Stabilization. ACM Transactions on Graphics (Proceedings of SIGGRAPH), 2013. [PDF][project page]

Shuaicheng Liu, Lu Yuan, Ping Tan, Jian Sun. SteadyFlow: Spatially Smooth Optical Flow for Video Stabilization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [PDF][project page]





Frame generation pipeline:

We use the color and depth images in (a) to generate the projection in (b) and the motion field in (c). Many pixels are missing because of the incomplete depth image. Hence, we warp the color image by the 'content-preserving' warping in (d) according to the green control points and a regular grid. This warping generates a color image (e) and a motion field (f). We then generate a complete motion field (g) by combining (c) and (f). The final video frame (h) is created by warping the original frame with (g).
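The last two steps of the pipeline, combining (c) and (f) into (g) and warping the frame with (g), can be sketched as follows. The hole marker (NaN), the backward-warp convention, and nearest-neighbor sampling are simplifying assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def combine_fields(field_c, field_f):
    """Build the complete field (g): keep the depth-projected field (c) where it
    is valid, and fill its holes (marked NaN) from the warping-based field (f)."""
    return np.where(np.isnan(field_c), field_f, field_c)

def warp_frame(frame, field):
    """Backward-warp a frame by a per-pixel motion field:
    out[y, x] = frame[y + dy, x + dx], sampled at the nearest pixel."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + field[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + field[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

A production version would use subpixel (bilinear) sampling, but the hole-filling logic and the backward-lookup structure are the same.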