Wednesday, July 16, 2025

On "CLIP Won't Learn Object-Attribute Binding from Natural Data and Here is Why"

Food for thought! What are the limits of natural images versus synthetic images for training models?

Caveat: I did not read the paper.

From the abstract:
"Contrastive vision-language models like CLIP are used for a large variety of applications, such as zero-shot classification or as vision encoder for multi-modal models. Despite their popularity, their representations show major limitations
For instance, CLIP models learn bag-of-words representations and, as a consequence, fail to distinguish whether an image is of "a yellow submarine and a blue bus" or "a blue submarine and a yellow bus".
Previous attempts to fix this issue added hard negatives during training or modified the architecture, but failed to resolve the problem in its entirety.
We suspect that the missing insights to solve the binding problem for CLIP are hidden in the arguably most important part of learning algorithms: the data. In this work, we fill this gap by rigorously identifying the influence of data properties on CLIP's ability to learn binding using a synthetic dataset.
We find that common properties of natural data such as low attribute density, incomplete captions, and the saliency bias, a tendency of human captioners to describe the object that is "most salient" to them, have a detrimental effect on binding performance.
In contrast to common belief, we find that neither scaling the batch size, i.e., implicitly adding more hard negatives, nor explicitly creating hard negatives enables CLIP to learn reliable binding. Only when the data expresses our identified data properties does CLIP learn almost perfect binding."
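To make the failure mode concrete, here is a minimal sketch of the kind of binding probe the abstract describes, using the Hugging Face transformers CLIP API. The image path "scene.jpg" is a placeholder of my own; swap in any picture of, say, a yellow submarine next to a blue bus.

```python
# Binding probe sketch: score one image against two captions that differ
# only in how attributes are bound to objects. A bag-of-words model
# assigns both captions nearly the same score.
# Assumes: pip install torch transformers pillow; "scene.jpg" is a
# placeholder image path.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = [
    "a yellow submarine and a blue bus",  # correct binding
    "a blue submarine and a yellow bus",  # swapped attributes
]
image = Image.open("scene.jpg")

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # shape (1, 2)
probs = logits.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```

A model that had actually learned binding would put nearly all the probability on the first caption; the bag-of-words behavior described above shows up as scores close to 50/50.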
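The "hard negatives" mentioned in the abstract are typically built by swapping attributes between the objects in a caption (the idea behind approaches like NegCLIP). A toy sketch of that construction, my own illustration rather than anything from the paper:

```python
from itertools import permutations

def attribute_swap_negatives(pairs):
    """Given (attribute, object) pairs from a caption, return captions
    with every non-identity reassignment of attributes to objects."""
    attrs = [a for a, _ in pairs]
    objs = [o for _, o in pairs]
    negatives = []
    for perm in permutations(attrs):
        if list(perm) == attrs:
            continue  # skip the original, correctly bound caption
        negatives.append(" and ".join(f"a {a} {o}" for a, o in zip(perm, objs)))
    return negatives

print(attribute_swap_negatives([("yellow", "submarine"), ("blue", "bus")]))
# ['a blue submarine and a yellow bus']
```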
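And for intuition about the three data properties the authors identify, here is a toy captioner over a synthetic scene with knobs for attribute density, caption completeness, and saliency bias. Purely illustrative; the paper's actual synthetic dataset is surely more involved.

```python
import random

def caption(scene, attr_density=1.0, completeness=1.0, saliency_bias=False):
    """Toy captioner over a synthetic scene given as (attribute, object,
    saliency) triples, with knobs for the three data properties:
    - attr_density:  probability an object's attribute is mentioned
    - completeness:  fraction of objects mentioned at all
    - saliency_bias: if True, mention only the most salient objects"""
    objects = sorted(scene, key=lambda t: -t[2]) if saliency_bias else list(scene)
    k = max(1, round(completeness * len(objects)))
    mentioned = objects[:k] if saliency_bias else random.sample(objects, k)
    phrases = []
    for attr, obj, _ in mentioned:
        if random.random() < attr_density:
            phrases.append(f"a {attr} {obj}")
        else:
            phrases.append(f"a {obj}")  # attribute dropped
    return " and ".join(phrases)

scene = [("yellow", "submarine", 0.9), ("blue", "bus", 0.4)]
random.seed(0)
print(caption(scene, attr_density=0.5, completeness=0.5, saliency_bias=True))
```

With all three knobs degraded the way natural captions are, most training captions never even state which attribute belongs to which object, which is the gap the paper points at.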

[2507.07985] CLIP Won't Learn Object-Attribute Binding from Natural Data and Here is Why (open access): https://arxiv.org/abs/2507.07985
