

Poster

Navigating Text-to-Image Generative Bias across Indic Languages

Surbhi Mittal · Arnav Sudan · Mayank Vatsa · Richa Singh · Tamar Glaser · Tal Hassner

# 329
Strong Double Blind review: This paper was not made available on public preprint services during the review process.
[ Paper PDF ]
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

This research evaluates the effectiveness of text-to-image (T2I) models for Indic languages prevalent across India, comparing their generative capabilities in these languages against their performance in English. Using this benchmark, we assess 30 Indic languages with 2 open-source diffusion models and 2 commercial APIs for generation. The primary objective is to gauge how well these models support Indic languages and to pinpoint areas that require improvement. Covering 30 languages spoken by a population exceeding a billion, the benchmark aims to deliver a thorough and insightful evaluation of T2I models in the context of Indic languages.
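The abstract does not include code, so the sketch below is only illustrative of the kind of generation step such a benchmark involves: sending the same prompt, rendered in English and in Indic languages, to an open-source diffusion model. The use of the Hugging Face diffusers library, the model ID, and the example prompts and translations are assumptions for illustration, not the authors' actual setup.

```python
# Illustrative sketch only: generate images for one concept expressed in
# English and two Indic languages using an open-source diffusion model.
# The model ID and prompts are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# One concept ("a farmer working in a field") in English, Hindi, and Tamil.
prompts = {
    "English": "a farmer working in a field",
    "Hindi": "खेत में काम करता हुआ एक किसान",
    "Tamil": "வயலில் வேலை செய்யும் ஒரு விவசாயி",
}

for lang, prompt in prompts.items():
    # Generate one image per language and save it for later comparison.
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"farmer_{lang}.png")
```

Comparing the outputs across languages (against the English generation as a reference) is one way such a benchmark can surface gaps in a model's Indic-language support.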
