We introduce MIGS (multi-identity Gaussian splatting), a novel method that learns a single neural representation for multiple identities, using only monocular videos. Recent 3D Gaussian Splatting (3DGS) approaches for human avatars require per-identity optimization. However, a multi-identity representation offers advantages for robustly animating humans under arbitrary poses. We propose to construct a high-order tensor that combines all the learnable parameters of our 3DGS representation across all training identities. By factorizing this tensor, we model the complex rigid and non-rigid deformations of multiple human subjects in a unified network with a reduced number of parameters. Our approach leverages information from all training identities, enabling robust animation under challenging unseen poses and outperforming existing approaches. We also demonstrate how it can be extended to learn unseen identities.
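The core idea of combining all identities' learnable 3DGS parameters into one high-order tensor and factorizing it can be illustrated with a minimal NumPy sketch. All sizes, the rank, and the CP-style factorization below are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Hypothetical sizes (assumptions, not from the paper): I identities,
# N Gaussians, P learnable parameters per Gaussian (e.g. position,
# rotation, scale, opacity, color coefficients).
I, N, P = 4, 1000, 14
rank = 16  # assumed factorization rank

rng = np.random.default_rng(0)

# CP-style rank-R factorization: the full (I, N, P) parameter tensor is
# never stored explicitly; it is reconstructed from three small factor
# matrices, so parameters are shared across identities.
A = rng.standard_normal((I, rank))   # per-identity factors
B = rng.standard_normal((N, rank))   # per-Gaussian factors
C = rng.standard_normal((P, rank))   # per-parameter factors

# Reconstruct the full tensor: T[i, n, p] = sum_r A[i,r] * B[n,r] * C[p,r]
T = np.einsum('ir,nr,pr->inp', A, B, C)
assert T.shape == (I, N, P)

# Parameter count: factorized storage vs. one dense tensor per identity.
dense = I * N * P                 # 56000
factored = rank * (I + N + P)     # 16288
print(f"dense: {dense}, factored: {factored}")
```

In an actual system the factor matrices would be optimized jointly over all identities' monocular videos, so that a slice `T[i]` yields the Gaussian parameters for identity `i`; the sketch only shows the storage structure and the parameter savings.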