Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation

ICCV 2023

Jay Zhangjie Wu1 Yixiao Ge3 Xintao Wang3 Stan Weixian Lei1 Yuchao Gu1 Yufei Shi1 Wynne Hsu2 Ying Shan3 Xiaohu Qie4 Mike Zheng Shou1

1Show Lab, 2National University of Singapore    3ARC Lab, 4Tencent PCG

Abstract

To replicate the success of text-to-image (T2I) generation, recent works employ large-scale video datasets to train a text-to-video (T2V) generator. Despite their promising results, such a paradigm is computationally expensive. In this work, we propose a new T2V generation setting—One-Shot Video Tuning, where only a single text-video pair is presented. Our model is built on state-of-the-art T2I diffusion models pre-trained on massive image data. We make two key observations: 1) T2I models can generate still images that represent verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we introduce Tune-A-Video, which involves a tailored spatio-temporal attention mechanism and an efficient one-shot tuning strategy. At inference, we employ DDIM inversion to provide structure guidance for sampling. Extensive qualitative and quantitative experiments demonstrate the remarkable ability of our method across various applications.


Method

Given a text-video pair (e.g., “a man is skiing”) as input, our method leverages a pretrained T2I diffusion model for T2V generation. During fine-tuning, we update the projection matrices in the attention blocks using the standard diffusion training loss. During inference, we sample a novel video from the latent noise inverted from the input video, guided by an edited prompt (e.g., “Spider-Man is surfing on the beach, cartoon style”).
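The inversion step above can be illustrated with a minimal numpy sketch of deterministic DDIM (eta = 0): the input video's latent is mapped forward to noise, and that noise is the starting point for sampling the edited video. The noise predictor `eps_model` below is a constant stand-in, not the paper's fine-tuned UNet, and the schedule values are illustrative assumptions; with a state-independent predictor the invert-then-sample round trip is exact, which is what makes the inverted noise a faithful structure code for the input.

```python
import numpy as np

# Illustrative noise schedule (NOT the paper's exact values).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def eps_model(x, t):
    # Placeholder for the fine-tuned denoiser's noise prediction.
    return np.full_like(x, 0.1)

def ddim_step(x, a_from, a_to, eps):
    # Deterministic DDIM update between noise levels abar_from -> abar_to.
    x0_pred = (x - np.sqrt(1.0 - a_from) * eps) / np.sqrt(a_from)
    return np.sqrt(a_to) * x0_pred + np.sqrt(1.0 - a_to) * eps

def ddim_invert(x0):
    # Clean latent -> noise: runs the DDIM updates forward in time,
    # encoding the input video's structure into the final noise.
    x, a_prev = x0, 1.0
    for t in range(T):
        x = ddim_step(x, a_prev, alphas_bar[t], eps_model(x, t))
        a_prev = alphas_bar[t]
    return x

def ddim_sample(xT):
    # Noise -> clean latent: the same updates in reverse order.
    x = xT
    for t in range(T - 1, -1, -1):
        a_to = alphas_bar[t - 1] if t > 0 else 1.0
        x = ddim_step(x, alphas_bar[t], a_to, eps_model(x, t))
    return x

latent = np.ones((4, 8, 8))  # toy per-frame latent
recovered = ddim_sample(ddim_invert(latent))
print(np.allclose(recovered, latent))  # True: each step is exactly invertible
```

In the actual method, `eps_model` depends on its input and on the (edited) text prompt, so sampling from the inverted noise preserves the input video's structure while the prompt drives the content change.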


Results


Pretrained T2I (Stable Diffusion)


Pretrained T2I (personalized)


Pretrained T2I (pose control)


Bibtex


    @inproceedings{wu2023tune,
        title={Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation},
        author={Wu, Jay Zhangjie and Ge, Yixiao and Wang, Xintao and Lei, Stan Weixian and Gu, Yuchao and Shi, Yufei and Hsu, Wynne and Shan, Ying and Qie, Xiaohu and Shou, Mike Zheng},
        booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
        pages={7623--7633},
        year={2023}
    }