Recent research has explored implicit representations, such as the signed distance function (SDF), for reconstructing interacting hands and objects. SDFs can model hand-held objects of arbitrary topology and overcome the resolution limitations of parametric models, allowing finer-grained reconstruction. However, learning a detailed SDF directly from visual features is challenging due to depth ambiguity and appearance similarity, especially in cluttered real-world scenes. In this paper, we propose a coarse-to-fine SDF framework for 3D hand-object reconstruction that leverages the perceptual advantages of the RGB-D modality, in both visual and geometric aspects, to progressively model the implicit field. We first model a coarse-level SDF from global image features to obtain a holistic perception of the 3D scene. We then propose a 3D Point-Aligned Implicit Function (3D PIFu) for fine-level SDF learning, which exploits local geometric cues from the point cloud to capture intricate details. To facilitate the transition from coarse to fine, we extract hand-object semantics from the implicit field as prior knowledge. We further propose a surface-aware efficient reconstruction strategy that sparsely samples query points based on this hand-object semantic prior. Experiments on two challenging hand-object datasets show that our method outperforms existing methods by a large margin.
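To make the point-aligned idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the authors' implementation) of a fine-level SDF predictor that combines a global image feature with local geometric features pooled from the point cloud around each query point. All module names, feature dimensions, and the k-NN pooling scheme are assumptions made for illustration only.

```python
# Hedged sketch: one plausible form of a point-aligned SDF predictor.
# Assumptions: the point cloud is back-projected from the depth map, a global
# image feature is available from an RGB encoder, and local context is gathered
# with a simple k-nearest-neighbor pooling over per-point features.
import torch
import torch.nn as nn


class PointAlignedSDF(nn.Module):
    def __init__(self, global_dim=256, local_dim=64, k=8):
        super().__init__()
        self.k = k
        # Shared per-point encoder for the observed point cloud.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, local_dim), nn.ReLU(),
            nn.Linear(local_dim, local_dim), nn.ReLU(),
        )
        # SDF head: query coordinates + global image feature + pooled local feature.
        self.sdf_head = nn.Sequential(
            nn.Linear(3 + global_dim + local_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, queries, points, global_feat):
        """
        queries:     (B, Q, 3) 3D query points
        points:      (B, N, 3) observed point cloud (e.g. from the depth map)
        global_feat: (B, C)    global image feature
        returns:     (B, Q)    predicted signed distances
        """
        B, Q, _ = queries.shape
        point_feat = self.point_mlp(points)                     # (B, N, D)

        # Local geometric context: average features of the k nearest cloud points.
        dist = torch.cdist(queries, points)                     # (B, Q, N)
        knn_idx = dist.topk(self.k, largest=False).indices      # (B, Q, k)
        idx = knn_idx.unsqueeze(-1).expand(-1, -1, -1, point_feat.size(-1))
        local = torch.gather(
            point_feat.unsqueeze(1).expand(-1, Q, -1, -1), 2, idx
        ).mean(dim=2)                                            # (B, Q, D)

        g = global_feat.unsqueeze(1).expand(-1, Q, -1)           # (B, Q, C)
        return self.sdf_head(torch.cat([queries, g, local], -1)).squeeze(-1)
```

In the same spirit, the surface-aware strategy described above would restrict `queries` to points sampled near the predicted coarse surface (e.g. within a small band selected via the hand-object semantic prior), rather than densely over the whole volume; the exact sampling rule is specific to the paper and not reproduced here.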