Learned keyframe interpolation for character animation has been extensively researched, and such methods require large motion datasets. However, owing to differences in hierarchical skeletal structure, these datasets often lack compatibility between their native motion skeleton and the skeletons required by different applications. To reconfigure motion data for new skeletons, motion re-targeting is essential. Yet conventional re-targeting methods are incompatible with concurrent animation workflows, while learned methods require pre-established datasets for the new skeletons. In this paper, we propose the first unsupervised learning approach, namely Point Cloud-based Motion Representation Learning (PC-MRL), for re-targeting motions from human motion datasets to any human skeleton for motion keyframe interpolation. PC-MRL consists of point cloud obfuscation via skeletal sampling and unsupervised skeleton reconstruction. The point cloud space represents 3D pose and motion data independently of skeletal geometry, effectively obscuring any specific skeletal configuration and ensuring cross-skeleton consistency. In this space, a cross-skeleton K-nearest neighbors loss is devised for unsupervised learning. Moreover, a first-frame offset quaternion is introduced to represent rotations with relative roll for motion interpolation. Comprehensive experiments demonstrate the effectiveness of PC-MRL in motion interpolation without using target skeletal motion data. PC-MRL also achieves superior reconstruction metrics for re-targeting.
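To make the cross-skeleton K-nearest neighbors loss concrete, the sketch below shows one plausible form: a symmetrized loss where each point sampled from the predicted skeleton is compared against its K nearest neighbors in the target point cloud, and vice versa. This is an illustrative assumption, not the paper's exact formulation; the function name `knn_point_loss` and the choice of K are hypothetical.

```python
import numpy as np

def knn_point_loss(pred: np.ndarray, target: np.ndarray, k: int = 3) -> float:
    """Hypothetical sketch of a K-nearest-neighbors point cloud loss.

    pred, target: (N, 3) and (M, 3) arrays of points sampled along
    two skeletons in the shared, geometry-independent point cloud space.
    Returns the symmetrized mean distance to the k nearest neighbors.
    """
    def one_way(a: np.ndarray, b: np.ndarray) -> float:
        # Pairwise Euclidean distances, shape (len(a), len(b)).
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        # For each point in `a`, average its k smallest distances into `b`.
        return float(np.sort(d, axis=1)[:, :k].mean())

    # Symmetrize so neither cloud can collapse onto a subset of the other.
    return one_way(pred, target) + one_way(target, pred)
```

Because the loss operates only on sampled point positions, it never references joint names or hierarchy, which is what allows supervision to cross skeleton boundaries.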