Poster
Take A Step Back: Rethinking the Two Stages in Visual Reasoning
Mingyu Zhang · Jiting Cai · Mingyu Liu · YUE XU · Cewu Lu · Yong-Lu Li
# 170
Strong Double Blind
Visual reasoning, as a prominent research area, plays a crucial role in AI by facilitating concept formation and interaction with the world. However, current works are usually developed separately on small datasets and thus lack generalization ability. Through rigorous evaluation on diverse benchmarks, we demonstrate the shortcomings of existing methods in achieving cross-domain reasoning and their tendency to fit data biases. In this paper, we revisit visual reasoning from a two-stage perspective: (1) symbolization and (2) logical reasoning given symbols or their representations. We find that the reasoning stage generalizes better than the symbolization stage. Thus, it is more efficient to implement symbolization via separate encoders for different data domains while using a shared reasoner. Given our findings, we establish design principles for visual reasoning frameworks following this separated-symbolization, shared-reasoning scheme. Our two-stage framework achieves impressive generalization ability on various visual reasoning tasks, including puzzles, physical prediction, and visual question answering (VQA), encompassing both 2D and 3D modalities. We believe our insights will pave the way for generalizable visual reasoning. Our code will be publicly available.
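The design principle above can be illustrated with a minimal sketch. All class and function names here are hypothetical and the internals are placeholders, not the authors' implementation: each data domain gets its own symbolization encoder, while a single reasoner module is shared across domains.

```python
# Hypothetical sketch of "separated symbolization, shared reasoning".
# Names and internals are illustrative placeholders, not the paper's code.

class DomainEncoder:
    """Symbolization stage: a domain-specific mapping from raw input to symbols."""
    def __init__(self, domain):
        self.domain = domain

    def encode(self, raw_input):
        # Placeholder symbolization: tag each raw element with its domain.
        return [(self.domain, x) for x in raw_input]


class SharedReasoner:
    """Reasoning stage: one module reused by every domain."""
    def infer(self, symbols):
        # Placeholder logic: operate on symbols regardless of source domain.
        return len(symbols)


# One encoder per domain (e.g. 2D puzzles, 3D physics), one shared reasoner.
encoders = {
    "2d_puzzle": DomainEncoder("2d_puzzle"),
    "3d_physics": DomainEncoder("3d_physics"),
}
reasoner = SharedReasoner()


def reason(domain, raw_input):
    symbols = encoders[domain].encode(raw_input)  # separated symbolization
    return reasoner.infer(symbols)                # shared reasoning
```

The key point of the sketch is structural: only the encoders differ per domain, so generalizing to a new domain means training a new encoder while reusing the same reasoner.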