

Poster #216

Global-to-Pixel Regression for Human Mesh Recovery

Yabo Xiao · Mingshu He · Dongdong Yu

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Existing human mesh recovery (HMR) methods commonly leverage global features or dense-annotation-based local features to produce a single prediction from the input image. However, the compressed global feature and local features disrupt the spatial geometry of the human body and make it hard to capture local dynamics, resulting in visual-mesh misalignment. Moreover, dense annotations are labor-intensive and expensive to obtain. To address these issues, we propose a global-to-pixel-wise prediction framework that preserves spatial information and obtains precise visual-mesh alignment for top-down HMR. Specifically, we present an adaptive 2D Keypoint-Guided Local Encoding Module that enables per-pixel features to capture fine-grained body-part information while maintaining structure and local context. The acquisition of local features relies exclusively on sparse 2D keypoint guidance, without any dense annotations or heuristic keypoint-based ROI pooling. The enhanced pixel features are used to predict residuals that rectify the initial estimate produced by the global feature. Second, we introduce a Dynamic Matching Strategy that determines positive/negative pixels by computing only classification and 2D keypoint costs, further improving visual-mesh alignment. Comprehensive experiments demonstrate the effectiveness of the network design. Our framework outperforms previous local regression methods by a large margin and achieves state-of-the-art performance on the Human3.6M and 3DPW datasets.
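The abstract describes two concrete mechanisms: an initial estimate regressed from the compressed global feature and rectified by residuals predicted from keypoint-guided per-pixel features, and a matching cost that combines only classification and 2D keypoint terms. The paper's code is not available on this page, so the following is only a minimal PyTorch-style sketch of those two ideas; every name, shape, and head here (global_to_pixel_refine, matching_cost, the 85-dim SMPL parameter vector, mean pooling over keypoints) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def global_to_pixel_refine(global_feat, pixel_feats, kpts_2d,
                           head_init, head_residual):
    # Hypothetical sketch: an initial estimate from the compressed global
    # feature is rectified by a residual predicted from per-pixel features
    # sampled at sparse 2D keypoints (no dense annotations required).
    theta_init = head_init(global_feat)                  # (B, P)

    # Bilinearly sample local features at keypoints given in [-1, 1].
    grid = kpts_2d.unsqueeze(1)                          # (B, 1, K, 2)
    local = F.grid_sample(pixel_feats, grid,
                          align_corners=False)           # (B, C, 1, K)
    local = local.squeeze(2).mean(dim=-1)                # (B, C), pooled

    return theta_init + head_residual(local)             # rectified params

def matching_cost(cls_logits, pred_kpts, gt_kpts, w_cls=1.0, w_kpt=1.0):
    # Hypothetical per-pixel cost: pixels with the lowest combined
    # classification + 2D keypoint cost would be selected as positives.
    cls_cost = -cls_logits.sigmoid().clamp_min(1e-6).log()             # (N,)
    kpt_cost = (pred_kpts - gt_kpts.unsqueeze(0)).abs().sum(dim=(-2, -1))
    return w_cls * cls_cost + w_kpt * kpt_cost           # (N,)

# Toy usage with assumed shapes; 85 = 72 pose + 10 shape + 3 camera
# parameters, as in common SMPL-based HMR heads.
B, C, H, W, K = 2, 256, 56, 56, 17
heads = (torch.nn.Linear(C, 85), torch.nn.Linear(C, 85))
theta = global_to_pixel_refine(torch.randn(B, C), torch.randn(B, C, H, W),
                               torch.rand(B, K, 2) * 2 - 1, *heads)
pos = matching_cost(torch.randn(H * W),
                    torch.rand(H * W, K, 2), torch.rand(K, 2)).argmin()
```

Note the design choice the abstract implies: because the matching cost uses only classification and 2D keypoint terms, positive/negative assignment needs no dense supervision, consistent with the framework's reliance on sparse keypoints alone.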
