🏆 >>>>>>>> [🧡Brief Introduction💜] <<<<<<<<<<
This project carefully summarizes 👍the latest advances in 2D digital human motion video generation👏, covering papers, datasets, and code repositories.
The repository is organized around three main driving conditions — Vision-driven, Text-driven, and Audio-driven — and also tracks frontier papers on LLM Planning.
For classification, we define the priority Audio > Text > Vision: a work that uses text but no audio is categorized as Text-Driven, a work that uses both text and audio is categorized as Audio-Driven, and so on.
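This priority rule can be written down as a minimal sketch; the function name and signature below are illustrative only, not part of the repository:

```python
def classify_driving_condition(has_vision: bool, has_text: bool, has_audio: bool) -> str:
    """Categorize a paper by its driving signals, with priority Audio > Text > Vision."""
    if has_audio:
        return "Audio-Driven"   # audio present: audio wins, even if text/vision also appear
    if has_text:
        return "Text-Driven"    # text without audio
    if has_vision:
        return "Vision-Driven"  # vision only
    return "Unclassified"
```

For example, a talking-head paper driven by both a text script and speech audio falls under Audio-Driven.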
Unlike previous summaries, this project explicitly identifies five key stages of digital human video generation:
🌑 Stage 1: Clarifying the driving source (Vision, Text, Audio) and the driven region (Part, Holistic), where "Part" mainly refers to the face;
🌒 Stage 2: Motion planning. Most works learn motion mappings via feature mapping, while a few use large language models (LLMs) for motion planning;
🌓 Stage 3: Human video generation. Most works are based on Diffusion Models, with a few based on Transformers;
🌔 Stage 4: Video refinement, applying separate refinement to the face, lips, teeth, and hands;
🌕 Stage 5: Acceleration, speeding up training and inference deployment as much as possible, targeting real-time output.
🔑 This project is driven by six core members:
- Haiwei Xue (Tsinghua University, lead) - Xiangyang Luo (Tsinghua University) - Zhanghao Hu (University of Edinburgh) - Xin Zhang (Xi'an Jiaotong University) - Xunzhi Xiang (University of Chinese Academy of Sciences) - Yuqin Dai (Nanjing University of Science and Technology)
💖 The core survey is fully supported and carefully guided by the following advisors:
- 刘健庄 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences) - Dr. 张镇嵩 (Huawei Noah's Ark Lab, 2012 Laboratories) - Dr. 李明磊 (01.AI) - Dr. 马飞 (Guangming Laboratory) - 吴志勇 (Tsinghua University / The Chinese University of Hong Kong)
🎉 Everyone is welcome to contribute their own research via PRs, so that together we can advance human motion video generation.
If you have any questions, feel free to email us ([email protected]) and we will respond as soon as possible.
We also warmly welcome newcomers to related fields to join us, learn together, and keep making progress!
🎁 >>>>>>>> [English Introduction] <<<<<<<<<<
This project provides a thorough summary of the latest advancements in the field of 2D digital human motion video generation, covering papers, datasets, and code repositories.
The repository is organized around three main driving conditions — Vision-driven, Text-driven, and Audio-driven — and also covers LLM Planning papers.
Unlike previous summaries, this project clearly outlines the five key stages in the field of digital human video generation:
🌑 Stage 1: Input Phase. Clarifying the driving source (Vision, Text, Audio) and driving region (Part, Holistic), where "Part" mainly refers to the face;
🌒 Stage 2: Motion Planning Phase. Most works learn motion mappings via feature mapping, while a few use large language models (LLMs) for motion planning;
🌓 Stage 3: Motion Video Generation Phase. Most works are based on diffusion models, with a few based on Transformers;
🌔 Stage 4: Video Refinement Phase, focusing on optimizing specific parts such as the face, lips, teeth, and hands;
🌕 Stage 5: Acceleration Phase, aiming to speed up training and deployment inference as much as possible, with the goal of achieving real-time output.
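The five stages above can be sketched as a toy pipeline. Every function below is a hypothetical stand-in for illustration only, not code from any surveyed work:

```python
# Toy sketch of the five-stage pipeline; all functions are hypothetical stand-ins.

def plan_motion(source: str) -> str:
    # Stage 2: map the driving signal (vision / text / audio) to a motion
    # representation, via learned feature mapping or LLM-based planning.
    return f"motion<{source}>"

def generate_video(motion: str, region: str) -> list:
    # Stage 3: synthesize frames for the driven region (part or holistic),
    # typically with a diffusion model.
    return [f"frame({motion},{region})"]

def refine(frames: list) -> list:
    # Stage 4: part-wise refinement of the face, lips, teeth, and hands.
    return [f + "+refined" for f in frames]

def render(source: str, region: str) -> list:
    # Stage 1 fixes the (source, region) pair; Stage 5 (acceleration toward
    # real-time output) is an engineering step omitted from this sketch.
    return refine(generate_video(plan_motion(source), region))
```

For instance, `render("audio", "holistic")` traces an audio-driven, holistic-body generation job through planning, generation, and refinement.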
🎉 We welcome everyone to contribute your research and submit PRs to collectively advance the technology of human motion video generation.
If you have any questions, feel free to contact us at ([email protected]), and we will respond as soon as possible. Additionally, we warmly welcome new members from related fields to join us, learn together, and make endless progress!
🍦 Exploring the latest papers in human motion video generation. 🍦
This work delves into Human Motion Video Generation, covering areas such as Portrait Animation, Dance Video Generation, Text2Face, Text2MotionVideo, and Talking Head. We believe this will be the most comprehensive survey to date on human motion video generation technologies. Please stay tuned! 😘😁😀
Note that, for clarity of scope, we exclude 3DGS and NeRF technologies (2D-3D-2D pipelines) from this survey.
If you discover any missing work or have any suggestions,
please feel free to submit a pull request or contact us ( [email protected] ).
We will promptly add the missing papers to this repository.
[1] We decompose human motion video generation into five key phases, covering all subtasks across various driving sources and body regions. To the best of our knowledge, this is the first survey to offer such a comprehensive framework for human motion video generation.
[2] We provide an in-depth analysis of human motion video generation from both motion planning and motion generation perspectives, a dimension that has been underexplored in existing reviews.
[3] We clearly delineate established baselines and evaluation metrics, offering detailed insights into the key challenges shaping this field.
[4] We present a set of potential future research directions, aimed at inspiring and guiding researchers in the field of human motion video generation.
Part (Face) || Portrait Animation
Holistic Human || Video-Guided Dance Video Generation
Holistic Human || Pose-Guided Dance Video Generation
Holistic Human || Try-On Video Generation
Holistic Human || Pose2Video
Part (Face) || Text2Face
Holistic Human || Text2MotionVideo
Part (Face) || Lip Synchronization
Part (Face) || Head Pose Driving
Holistic Human || Audio-Driven Holistic Body Driving
Part (Face) || Fine-Grained Style and Emotion-Driven Animation
LLM for 2D
LLM for 3D
If you find our survey and repository useful for your research project, please consider citing our paper:
Contributions are welcome! Please feel free to create an issue or open a pull request with your contributions.
Haiwei Xue 💻 🎨 🤔 |
Xiangyang Luo 🐛 |
Zhanghao Hu 🥙 💻 |
Xin Zhang 😘🎪 😍 |
Xunzhi Xiang 🛴 😍 |
Yuqin Dai 😘 👸 |
This project is licensed under the MIT License - see the LICENSE file for details.
We would like to acknowledge the contributions of all researchers and developers in the field of human motion video generation. Their work has been instrumental in the advancement of this technology.