Cover image generated by Stable Diffusion 3.

Spring 2025 - Wed 15:10-18:00, Peking University

This course primarily introduces the most widely used generative models, including Autoregressive (AR) Models, Variational Autoencoders (VAEs), Normalizing Flow Models, Generative Adversarial Networks (GANs), Energy-based Models, Score-Based Models, Diffusion Models, Mamba Models, and Hybrid Generative Models. It also covers various efficient methods for accelerating generative models, as well as their applications in areas such as Large Language Models (LLMs), video generation, 3D and geometry, robotics and AI agents, materials science, medicine, and proteins and biology. Additionally, the course explores the principles behind some of the most popular recent models, such as ChatGPT, DeepSeek-V3, DeepSeek-R1, Sora, AlphaGo Zero, and AlphaFold 3.

This is a graduate-level course designed for students who are currently conducting or planning to conduct research on deep generative models.

Syllabus

10 Weeks

  • 1: Introduction
  • 2: Autoregressive (AR) Models
  • 3: Variational Autoencoders (VAEs)
  • 4: Generative Adversarial Networks (GANs)
  • 5: Normalizing Flow Models
  • 6: Energy-based Models
  • 7: Score-Based Models
  • 8: Diffusion Models
  • 9: Flow Matching
  • 10: State Space Models (SSMs, e.g., Mamba)
  • 11: Hybrid Generative Models
  • 12: Efficient Generative Models
  • 13: Evaluation Metrics
  • 14: Applications of Generative Models (LLMs, Videos, 3D and Geometry, Robotics and AI Agents, Materials Science, Medicine, Proteins and Biology)

6 Weeks

  • 3 Weeks: Paper Reading
  • 3 Weeks: Project Presentation

Course Staff

Feedback

For questions, please discuss in the WeChat group. You can also email Dr. Hao Tang at hao.tang@pku.edu.cn.