Poster

BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models

Rizhao Cai · Zirui Song · Dayan Guan · Zhenhao Chen · Yaohang Li · Xing Luo · Chenyu Yi · Alex Kot

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Large Multimodal Models (LMMs) such as GPT-4V and LLaVA have shown remarkable capabilities in visual reasoning on data in common image styles. However, their robustness against diverse style shifts, crucial for practical applications, remains largely unexplored. In this paper, we propose a new benchmark, BenchLMM, to assess the robustness of LMMs to three different styles: artistic image style, imaging sensor style, and application style. Using BenchLMM, we comprehensively evaluate state-of-the-art LMMs and reveal that: 1) LMMs generally suffer performance degradation when working with other styles; 2) an LMM that performs better than another model in the common style is not guaranteed to perform better in other styles; 3) LMMs' reasoning capability can be enhanced by prompting them to predict the style first, based on which we propose a versatile, training-free method for improving LMMs; and 4) an intelligent LMM is expected to interpret the causes of its errors when facing stylistic variations. We hope that our benchmark and analysis can shed new light on developing more intelligent and versatile LMMs. The benchmark and evaluation code have been released at https://github.com/AIFEG/BenchLMM
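The "predict the style first" idea in point 3) can be illustrated with a minimal two-step prompting sketch. This is only an assumed reading of the abstract, not the paper's actual implementation: `query_lmm` is a hypothetical stand-in for any LMM chat call (e.g., a GPT-4V or LLaVA wrapper), and the prompt wording is illustrative.

```python
# Minimal sketch of style-first prompting, as described in the abstract.
# Assumptions: `query_lmm(image_path, prompt) -> str` is a hypothetical
# interface to an LMM; the exact prompts used by BenchLMM may differ.
from typing import Callable


def style_first_answer(
    query_lmm: Callable[[str, str], str],
    image_path: str,
    question: str,
) -> str:
    # Step 1: ask the LMM to identify the image's style before reasoning.
    style = query_lmm(
        image_path,
        "What is the style of this image (e.g., photo, painting, sketch, "
        "infrared, X-ray)? Answer with a short phrase.",
    )
    # Step 2: condition the actual question on the predicted style,
    # which the abstract reports improves reasoning across style shifts.
    prompt = (
        f"This image is in the following style: {style.strip()}. "
        f"Taking that style into account, answer: {question}"
    )
    return query_lmm(image_path, prompt)
```

Because the method only rewrites prompts, it is training-free and can wrap any existing LMM without modifying its weights.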
