

Poster

Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation

Haizhong Zheng · Jiachen Sun · Shutong Wu · Bhavya Kailkhura · Zhuoqing Morley Mao · Chaowei Xiao · Atul Prakash

Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Given a real-world dataset, dataset condensation (DC) aims to synthesize a small synthetic dataset that captures the knowledge of the natural dataset and can be used to train models with comparable accuracy. Recent works enhance DC with data parameterization, which condenses data into highly compact parameterized data containers instead of images. By optimizing with an appropriate loss function, data parameterization methods can generate high-quality synthetic datasets and achieve improved model performance. However, top-performing data parameterization methods rely on GPU-memory-intensive trajectory-based losses for their optimization. In this paper, we propose a novel data parameterization architecture, Hierarchical Memory Network (HMN), that achieves performance comparable to or better than SOTA methods even with a GPU-memory-friendly batch-based loss function. HMN's key insight is to directly capture feature sharing at both the within-class and across-class levels through a hierarchical parameterized architecture. We evaluate HMN on five public datasets and show that HMN outperforms current baselines (including those using trajectory-based losses), even when HMNs are trained with a GPU-friendly batch-based loss.
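
The abstract describes the architecture only at a high level. As a minimal, hypothetical sketch of what a hierarchically parameterized data container could look like (not the authors' actual HMN design; the module names, tier sizes, and decoder below are all assumptions), one could share a dataset-level memory across all classes, a class-level memory within each class, and keep small per-instance memories, decoding their combination into synthetic images:

```python
# Hypothetical sketch, not the authors' HMN implementation: a data container with
# three tiers of learnable memories. The dataset-level memory captures across-class
# feature sharing, the class-level memory captures within-class sharing, and small
# per-instance memories differentiate individual synthetic images.
import torch
import torch.nn as nn

class HierarchicalDataContainer(nn.Module):
    def __init__(self, num_classes=10, ipc=10, latent_dim=64, img_size=32, channels=3):
        super().__init__()
        self.num_classes, self.ipc = num_classes, ipc
        # Tier 1: one memory shared by the whole dataset (across-class sharing).
        self.dataset_memory = nn.Parameter(torch.randn(1, latent_dim))
        # Tier 2: one memory per class (within-class sharing).
        self.class_memory = nn.Parameter(torch.randn(num_classes, latent_dim))
        # Tier 3: small per-instance memories (ipc = images per class).
        self.instance_memory = nn.Parameter(torch.randn(num_classes, ipc, latent_dim))
        # Lightweight decoder from the combined latent to pixel space (an assumption).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, channels * img_size * img_size), nn.Tanh(),
        )
        self.shape = (channels, img_size, img_size)

    def forward(self):
        # Broadcast-add the three tiers to get one latent per synthetic image: (C, ipc, D).
        z = (self.dataset_memory.unsqueeze(0)
             + self.class_memory.unsqueeze(1)
             + self.instance_memory)
        imgs = self.decoder(z.view(-1, z.size(-1))).view(-1, *self.shape)
        labels = torch.arange(self.num_classes).repeat_interleave(self.ipc)
        return imgs, labels

# Usage: the container's parameters (memories and decoder) would be optimized with a
# condensation objective, e.g. a batch-based gradient- or feature-matching loss,
# instead of optimizing raw pixels directly.
container = HierarchicalDataContainer()
synthetic_images, synthetic_labels = container()
```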
