

Poster

UMBRAE: Unified Multimodal Brain Decoding

Weihao Xia · Raoul de Charette · Cengiz Oztireli · Jing-Hao Xue

Poster #119
[ Project Page ] [ Paper PDF ]
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

In this paper, we aim to tackle two prevailing challenges in brain-powered research. To extract instance-level conceptual and spatial details from neural signals, we introduce an efficient universal brain encoder for multimodal-brain alignment and recover object descriptions at multiple levels of granularity from subsequent multimodal large language models. To account for the unique brain patterns of different individuals, we introduce a cross-subject training strategy. This allows neural signals from multiple subjects to be trained within the same model without additional training resources or time, and it benefits from subject diversity, yielding better results than training on a single subject. To better assess our method, we also introduce BrainHub, a comprehensive brain understanding benchmark. Experiments demonstrate that our proposed method, UMBRAE, not only achieves superior results on the newly introduced tasks but also outperforms existing models on established tasks under recognized metrics. Code and data will be made publicly available to facilitate further research.
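
Since the abstract only outlines the architecture, the following is a minimal, hypothetical PyTorch sketch of the cross-subject idea: each subject gets a small subject-specific tokenizer (here a linear layer, an assumption), while a single shared backbone is trained on batches from all subjects and aligned to image-space features that a multimodal LLM could consume. The class names, token/feature dimensions, voxel counts, and MSE alignment loss are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a cross-subject brain encoder (not the authors' code).
import torch
import torch.nn as nn


class BrainEncoder(nn.Module):
    """Shared encoder with per-subject input tokenizers.

    Each subject's fMRI vector may have a different dimensionality, so a
    subject-specific linear layer (assumption) maps it into a common token
    sequence; a shared transformer then produces features intended to align
    with an image encoder's embedding space, which an MLLM could consume.
    """

    def __init__(self, voxel_dims, num_tokens=8, dim=256, depth=4):
        super().__init__()
        # One lightweight tokenizer per subject; ModuleDict keys are strings.
        self.tokenizers = nn.ModuleDict({
            str(s): nn.Linear(v, num_tokens * dim) for s, v in voxel_dims.items()
        })
        self.num_tokens, self.dim = num_tokens, dim
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, fmri, subject):
        tokens = self.tokenizers[str(subject)](fmri)         # (B, T*D)
        tokens = tokens.view(-1, self.num_tokens, self.dim)  # (B, T, D)
        return self.backbone(tokens)                         # (B, T, D)


# Cross-subject training step: batches mix subjects, but all share the same
# backbone, so adding a subject adds only a small tokenizer.
voxel_dims = {1: 15724, 2: 14278}        # hypothetical per-subject voxel counts
model = BrainEncoder(voxel_dims)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for subject, voxels in [(1, torch.randn(4, 15724)), (2, torch.randn(4, 14278))]:
    target = torch.randn(4, 8, 256)      # stand-in for image-space features
    pred = model(voxels, subject)
    loss = nn.functional.mse_loss(pred, target)  # alignment loss (assumed MSE)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The sketch illustrates why cross-subject training costs little extra: each new subject contributes only one small tokenizer, while the backbone and everything downstream are shared across all subjects.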
