

Poster

HyperSpaceX: Radial and Angular Exploration of HyperSpherical Dimensions

Chiranjeev Chiranjeev · Muskan Dosi · Kartik Thakral · Mayank Vatsa · Richa Singh

Strong Double Blind: this paper was not made available on public preprint services during the review process.
[ Project Page ]
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Traditional deep learning models rely on methods such as softmax cross-entropy and ArcFace loss for tasks like classification and face recognition. These methods mainly explore angular features in a hyperspherical space, often resulting in entangled inter-class features because angular data become dense when many classes are present. In this paper, a new field of feature exploration known as \textit{HyperSpaceX} is proposed, which enhances class discrimination by exploring both angular and radial dimensions in multi-hyperspherical spaces, facilitated by a novel \textit{DistArc} loss. The proposed DistArc loss encompasses three feature arrangement components, two angular and one radial, enforcing intra-class binding and inter-class separation in a multi-radial arrangement and thereby improving feature discriminability. Evaluation of the \textit{HyperSpaceX} framework for the novel representation utilizes a proposed predictive measure that accounts for both angular and radial elements, providing a more comprehensive assessment of model accuracy beyond standard metrics. Experiments across six object classification and five face recognition datasets demonstrate state-of-the-art \textit{(SoTA)} results for \textit{HyperSpaceX}, with up to a 20\% performance improvement on large-scale object datasets.
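
The abstract does not give the exact formulation of the \textit{DistArc} loss or of the angular-radial predictive measure, so the sketch below is only a minimal illustration of the general idea in PyTorch, not the paper's method: an ArcFace-style angular margin term combined with a radial term that binds each class's feature norm to a per-class target radius, plus a prediction rule that scores classes by both angular similarity and radial gap. The class name, the shell assignment, and the `lambda_radial` weighting are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RadialAngularLossSketch(nn.Module):
    """Illustrative sketch (not the paper's DistArc loss): an angular margin
    term plus a radial term that ties each class to an assumed target radius,
    so classes can separate both by direction and by distance from the origin."""

    def __init__(self, feat_dim, num_classes, margin=0.5, scale=30.0,
                 num_shells=4, lambda_radial=0.1):
        super().__init__()
        # Learnable class directions (angular prototypes), as in ArcFace-style heads.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Assumption: classes are spread over a few radial "shells"; the actual
        # multi-hyperspherical assignment in the paper may differ.
        shells = torch.linspace(1.0, float(num_shells), num_shells)
        self.register_buffer(
            "class_radius", shells[torch.arange(num_classes) % num_shells].float()
        )
        self.margin, self.scale, self.lambda_radial = margin, scale, lambda_radial

    def forward(self, features, labels):
        # Angular term: cosine logits with an additive angular margin on the true class.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        cos = cos.clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cos) * self.scale
        angular_loss = F.cross_entropy(logits, labels)
        # Radial term: pull the feature norm toward the class's assumed target radius.
        radial_loss = F.mse_loss(features.norm(dim=1), self.class_radius[labels])
        return angular_loss + self.lambda_radial * radial_loss

    @torch.no_grad()
    def predict(self, features):
        # Prediction that accounts for both angle and radius: cosine similarity
        # minus a penalty on the radial mismatch (illustrative weighting).
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        radial_gap = (features.norm(dim=1, keepdim=True)
                      - self.class_radius.unsqueeze(0)).abs()
        return (cos - self.lambda_radial * radial_gap).argmax(dim=1)
```

In this reading, the radial term plays the role of spreading classes across multiple hyperspheres while the angular term separates them on each sphere; how the paper actually balances its two angular components and one radial component is specified only in the full text.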
