Abstract

This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge of, or limiting constraints on, the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches to outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure; and a general, robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple-view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.

Paper

General Dynamic Scene Reconstruction from Multiple View Video
Armin Mustafa, Hansung Kim, Jean-Yves Guillemaut and Adrian Hilton
ICCV 2015

Data

Data used in this work can be found in the CVSSP 3D Data Repository.

Citation

			@inproceedings{MustafaICCV15,
				author = {Mustafa, A. and Kim, H. and Guillemaut, J.-Y. and Hilton, A.},
				title = {General dynamic scene reconstruction from wide-baseline views},
				booktitle = {ICCV},
				year = {2015}
			}

Acknowledgments

This research was supported by the European Commission, FP7 IMPART: Intelligent Management Platform for Advanced Real-time Media Processes project (grant 316564).