OpenBias: Open-set Bias Detection in Text-to-Image Generative Models

1University of Trento, 2UT Austin, 3SHI Labs @ Georgia Tech & UIUC, 4Picsart AI Research (PAIR)

CVPR 2024 (Highlight)

Abstract

Text-to-image generative models are becoming increasingly popular and accessible to the general public. As these models see large-scale deployment, it is necessary to deeply investigate their safety and fairness so that they do not disseminate and perpetuate any kind of bias. However, existing works focus on detecting closed sets of biases defined a priori, limiting their studies to well-known concepts. In this paper, we tackle the challenge of open-set bias detection in text-to-image generative models, presenting OpenBias, a new pipeline that identifies and quantifies the severity of biases agnostically, without access to any precompiled set. OpenBias has three stages. In the first stage, we leverage a Large Language Model (LLM) to propose biases given a set of captions. In the second, the target generative model produces images using the same set of captions. In the last, a Vision Question Answering (VQA) model recognizes the presence and extent of the previously proposed biases. We study the behavior of Stable Diffusion 1.5, 2, and XL, emphasizing new biases never investigated before. Via quantitative experiments, we demonstrate that OpenBias agrees with current closed-set bias detection methods and with human judgement.



OpenBias

Starting from a dataset of real textual captions T, we leverage a Large Language Model (LLM) to build a knowledge base B of possible biases that may occur during image generation. In the second stage, images are synthesized by the target generative model conditioned on the captions where a potential bias has been identified. Finally, the biases are assessed and quantified by querying a VQA model with the caption-specific questions extracted during the bias-proposal phase.
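The three stages above can be sketched in code. This is a minimal illustration, not the authors' implementation: `propose_biases`, `generate_images`, and `vqa_answer` are hypothetical stand-ins for the LLM, the target text-to-image model, and the VQA model, and `bias_severity` shows one plausible way to turn the VQA answer distribution into a score (1 minus normalized entropy: 1.0 when every generated image receives the same answer, 0.0 when answers are uniform over the candidate classes).

```python
import math
from collections import Counter


def bias_severity(answers, classes):
    """Quantify a bias from VQA answers over generated images.

    Returns 1 - normalized entropy of the answer distribution:
    1.0 when every image gets the same answer (maximally biased),
    0.0 when answers are spread uniformly over the candidate classes.
    """
    counts = Counter(answers)
    n = len(answers)
    if n == 0 or len(classes) <= 1:
        return 0.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return 1.0 - entropy / math.log(len(classes))


def detect_biases(captions, propose_biases, generate_images, vqa_answer, n_images=10):
    """Run the three pipeline stages for each caption (sketch).

    Hypothetical callables:
      propose_biases(caption) -> [(bias_name, question, classes), ...]  # stage 1: LLM
      generate_images(caption, n) -> list of images                     # stage 2: T2I model
      vqa_answer(image, question, classes) -> one class                 # stage 3: VQA
    """
    per_bias_scores = {}
    for caption in captions:
        for bias_name, question, classes in propose_biases(caption):
            images = generate_images(caption, n_images)
            answers = [vqa_answer(img, question, classes) for img in images]
            per_bias_scores.setdefault(bias_name, []).append(
                bias_severity(answers, classes)
            )
    # Average severity per bias across all captions where it was proposed.
    return {b: sum(s) / len(s) for b, s in per_bias_scores.items()}
```

As a usage example, stubbing the three models with trivial callables (a fixed bias proposal, dummy images, and a VQA that always answers the first class) yields a severity of 1.0 for that bias, since every image receives the same answer.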



Try out OpenBias!

This interactive interface lets you explore various biases discovered by OpenBias.

It compares Stable Diffusion XL, 2, and 1.5.

Choose a bias and a corresponding context to proceed.







Poster

BibTeX

        
@InProceedings{D'Inca_2024_CVPR,
    author    = {D'Inc\`a, Moreno and Peruzzo, Elia and Mancini, Massimiliano and Xu, Dejia and Goel, Vidit and Xu, Xingqian and Wang, Zhangyang and Shi, Humphrey and Sebe, Nicu},
    title     = {OpenBias: Open-set Bias Detection in Text-to-Image Generative Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {12225-12235}
}