Abstract

This paper presents a hybrid skeleton-driven surface registration (HSDSR) approach to generate temporally consistent meshes from multiple view video of human subjects. 2D pose detections from multiple view video are used to estimate 3D skeletal pose on a per-frame basis. The 3D pose is embedded into a 3D surface reconstruction, allowing the surface from any frame to be reposed to match any other frame in the captured sequence. Skeletal motion transfer is performed by selecting a reference frame from the surface reconstruction data and reposing it to match the pose estimates of the other frames in the sequence. This provides an initial coarse alignment prior to refinement by a patch-based non-rigid mesh deformation. The proposed approach overcomes limitations of previous work by reposing a reference mesh to match the pose of a target mesh reconstruction, providing a closer starting point for further non-rigid mesh deformation. The proposed approach is shown to achieve results comparable to existing model-based and model-free approaches. Finally, it is demonstrated that this framework provides an intuitive way for artists and animators to edit volumetric video.
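The reposing step relies on a skeleton embedded in the reference surface, so that skeletal pose drives the surface vertices. A standard technique for this is linear blend skinning, where each vertex is a weighted blend of its position under each bone's transform. The sketch below illustrates the idea only; it is not the paper's exact embedding, and the function names and the restriction to z-axis rotations are illustrative simplifications.

```python
import math

def rotate_z(angle, point):
    """Rotate a 3D point about the z-axis by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = point
    return (c * x - s * y, s * x + c * y, z)

def skin_vertex(vertex, bones, weights):
    """Linear blend skinning: blend the vertex position under each
    bone's transform by its skinning weight.
    Each bone is (rotation_angle, translation); weights sum to 1."""
    out = [0.0, 0.0, 0.0]
    for (angle, trans), w in zip(bones, weights):
        rx, ry, rz = rotate_z(angle, vertex)
        out[0] += w * (rx + trans[0])
        out[1] += w * (ry + trans[1])
        out[2] += w * (rz + trans[2])
    return tuple(out)

# Repose a vertex influenced equally by two bones:
# one at rest (identity) and one rotated 90 degrees about z.
bones = [(0.0, (0.0, 0.0, 0.0)), (math.pi / 2, (0.0, 0.0, 0.0))]
reposed = skin_vertex((1.0, 0.0, 0.0), bones, [0.5, 0.5])
# The blended vertex lies halfway between (1,0,0) and (0,1,0).
```

In the paper's pipeline this kind of skeleton-driven deformation supplies only the coarse alignment; the patch-based non-rigid deformation then corrects the residual surface detail.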

Paper

Hybrid Skeleton Driven Surface Registration for Temporally Consistent Volumetric Video
João Regateiro, Marco Volino and Adrian Hilton
International Conference on 3D Vision (3DV) 2018



Citation

    @inproceedings{Regateiro:3DV:2018,
        AUTHOR = "Regateiro, Joao and Volino, Marco and Hilton, Adrian",
        TITLE = "Hybrid Skeleton Driven Surface Registration for Temporally Consistent Volumetric Video",
        BOOKTITLE = "International Conference on 3D Vision (3DV)",
        YEAR = "2018",
    }

Data

Public datasets used in this paper can be found on the CVSSP3D Publications and Datasets page.

Acknowledgments

This research was supported by the EPSRC Audio-Visual Media Research Platform Grant (EP/P022529/1) and InnovateUK project Total Capture (102685). The authors would also like to thank Charles Malleson for useful discussions.