Textualize Visual Prompt for Image Editing via Diffusion Bridge

1Western University  2VIVO  3Xidian University

A visual prompt specifies, through a before-and-after image pair, a visual transformation that is difficult to describe accurately in language. Our method distills such delicate transformations into pseudo text tokens (<A> and <C>), supports hybrid editing together with natural text, and controls the editing intensity while maintaining rigorous consistency.
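For intuition, here is a minimal sketch of how the learned pseudo tokens could be mixed with a natural-text prompt and dialed up or down for intensity. It assumes the optimized embeddings are available as plain tensors and uses hypothetical names (`text_embeds`, `pseudo_embeds`, `strength`); the exact mechanism in the paper and released code may differ.

```python
import torch

def hybrid_prompt_embeddings(text_embeds: torch.Tensor,    # (n_text, d) natural-text token embeddings
                             pseudo_embeds: torch.Tensor,  # (n_pseudo, d) optimized <A>/<C> embeddings
                             strength: float = 1.0) -> torch.Tensor:
    """Concatenate natural-text and learned pseudo-token embeddings.

    `strength` linearly scales the pseudo tokens (0.0 = no learned edit,
    1.0 = full learned transformation) as one simple way to control intensity.
    """
    return torch.cat([text_embeds, strength * pseudo_embeds], dim=0)
```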

Abstract

A visual prompt, a pair of before-and-after edited images, can convey indescribable imagery transformations and has prospered in image editing. However, current visual prompt methods rely on a pretrained text-guided image-to-image generative model that requires a triplet of text, before image, and after image for retraining over a text-to-image model. Crafting such triplets and retraining limits the scalability and generalization of editing. In this paper, we present a framework based on any single text-to-image model, without relying on an explicit image-to-image model, thus enhancing generalizability and scalability. Specifically, by leveraging the probability-flow ordinary differential equation, we construct a diffusion bridge that transfers the distribution between before-and-after images under text guidance. By optimizing the text along the bridge, the framework adaptively textualizes the editing transformation conveyed by the visual prompt into text embeddings without any other model. Meanwhile, we introduce differential attention control during text optimization, which disentangles the text embedding from the invariant content shared by the before-and-after images so that it captures solely the delicate transformation and generalizes to editing various images. Experiments on real images validate competitive results in generalization, contextual coherence, and high fidelity for delicate editing with just one image pair as the visual prompt.
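To make the bridge concrete, the following is a minimal sketch of the idea, assuming a generic noise-prediction network `eps_model(x, t, emb)`, a precomputed `alphas_cumprod` schedule, and deterministic DDIM updates as the probability-flow ODE solver. The actual objective, schedules, and efficiency measures follow the paper rather than this simplified loop.

```python
import torch

@torch.no_grad()
def ddim_invert(eps_model, x0_before, alphas_cumprod, null_emb):
    """Deterministically map the before-image to its latent x_T with the
    unconditional model (probability-flow / DDIM inversion)."""
    x = x0_before
    for t in range(len(alphas_cumprod) - 1):
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t + 1]
        eps = eps_model(x, t, null_emb)
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x

def ddim_denoise(eps_model, x_T, alphas_cumprod, text_emb):
    """Run the same deterministic ODE backwards under text guidance
    (kept differentiable with respect to the text embedding)."""
    x = x_T
    for t in reversed(range(1, len(alphas_cumprod))):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = eps_model(x, t, text_emb)
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
    return x

def optimize_text_embedding(eps_model, x0_before, x0_after, alphas_cumprod,
                            null_emb, text_emb, steps=500, lr=1e-3):
    """Fix both ends of the bridge (x_T inverted from the before-image and
    the after-image x_0^a) and optimize only the text embedding."""
    x_T = ddim_invert(eps_model, x0_before, alphas_cumprod, null_emb)
    text_emb = text_emb.clone().requires_grad_(True)
    opt = torch.optim.Adam([text_emb], lr=lr)
    for _ in range(steps):
        x0_hat = ddim_denoise(eps_model, x_T, alphas_cumprod, text_emb)
        loss = torch.nn.functional.mse_loss(x0_hat, x0_after)
        opt.zero_grad()
        loss.backward()  # a practical implementation would truncate or checkpoint this chain
        opt.step()
    return text_emb.detach()
```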

Learning Delicate Visual Transformation via Diffusion Bridge

Left: The before-image is first transferred to a deterministic latent encoding via the unconditional model and then to the after-image under text guidance. The text embeddings are optimized with fixed start (latent $\mathbf{x}_T$) and end (after-image $\mathbf{x}_0^a$) states. Right: During training, the attention of the before-image $M_t^b$ is first multiplied by the column-transformation matrix $\mathbf{\Lambda}$ to switch the token columns and then masked with $\mathbf{F}$. The attention of the after-image $M_t^a$ is masked with $1-\mathbf{F}$ to obtain the attention of the $y$ tokens. The final $M_t$ is the sum of the two masked attentions. This preserves the linguistic format of the cross-attention and enables the embedding to learn a disentangled and generalizable visual transformation.
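As a sketch of the right-panel operation, the snippet below assumes cross-attention maps stored as (heads, pixels, tokens) tensors, a (tokens, tokens) column-transformation matrix `Lam`, and a binary token mask `F` broadcast over the token dimension; the names and shapes are illustrative rather than taken from the released implementation.

```python
import torch

def differential_attention(M_b: torch.Tensor,   # (heads, pixels, tokens) before-image cross-attention
                           M_a: torch.Tensor,   # (heads, pixels, tokens) after-image cross-attention
                           Lam: torch.Tensor,   # (tokens, tokens) column-transformation matrix
                           F: torch.Tensor      # (tokens,) binary mask: 1 keeps a before-image column
                           ) -> torch.Tensor:
    """Combine the two attention maps: before-image token columns are
    re-indexed by Lam and kept where F == 1; the remaining columns (the
    learned tokens) come from the after-image attention. The result keeps
    a valid cross-attention layout over the new prompt."""
    M_b_aligned = M_b @ Lam                      # switch the token columns of the before-attention
    return F * M_b_aligned + (1.0 - F) * M_a     # masked sum of the two attentions
```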

Qualitative Comparisons on Real Images

Inference Overview

Visual prompts with different editing types and different levels of geometric change. Our method generalizes across editing types and scenes while preserving geometric structure to the degree each edit requires.

BibTeX

@inproceedings{xu2025textualize,
  title     = {Textualize Visual Prompt for Image Editing via Diffusion Bridge},
  author    = {Xu, Pengcheng and Fan, Qingnan and Kou, Fei and Qin, Shuai and Gu, Hong and Zhao, Ruoyu and Ling, Charles and Wang, Boyu},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2025}
}