Hao Tang

School of Computer Science, Peking University
Office: 5 Yiheyuan Road, Haidian District, Beijing, 100871, China🇨🇳
Email: bjdxtanghao@gmail.com

Hey, thanks for stopping by! 👋

I am a tenure-track Assistant Professor at the School of Computer Science, Peking University, China🇨🇳. Previously, I held postdoctoral positions at CMU, USA🇺🇸, and ETH Zürich, Switzerland🇨🇭. I earned my master's degree from Peking University, China🇨🇳, and completed my Ph.D. at the University of Trento, Italy🇮🇹. I was also a visiting Ph.D. student at the University of Oxford, UK🇬🇧, and a visiting intern at IIAI, UAE🇦🇪.

Beyond academia, I have also had the honor of serving as a senior technical advisor for numerous AI startups in the USA🇺🇸, UK🇬🇧, Romania🇷🇴, and China🇨🇳, with technologies spanning Efficient AI, AIGC, AI4Blockchain, and more.

News & Events

2024-10 We have 1 paper (Quantization on Bird's-Eye View Representation) accepted to WACV 2025 and 1 paper (Semantic Segmentation on Autonomous Vehicles Platform) accepted to TCAD 2024.
2024-09 🎉🎉🎉I was named one of the World's Top 2% Scientists in 2024 by Stanford University, and we have 1 paper (Camera-Agnostic Attack) accepted to NeurIPS 2024 and 1 paper (Medical Image Segmentation) accepted to ACCV 2024.
2024-08 🎉🎉🎉I was an invited speaker at the 2nd Workshop & Challenge on Micro-gesture Analysis for Hidden Emotion Understanding (MiGA) at IJCAI 2024, and we have 2 papers (Guided Image Translation + 3D Human Pose Estimation) accepted to PR 2024.
2024-07 We have 6 papers (Motion Mamba + Dataset Growth + Story Visualization and Completion + Diffusion Model for Semantic Image Synthesis + Generalizable Image Editing + 3D Semantic Segmentation) accepted to ECCV 2024, 1 paper (Survey about Physical Adversarial Attack) accepted to TPAMI 2024, and 2 papers (Talking Head Avatar + Story Visualization and Continuation) accepted to ACM MM 2024.
2024-06 🎉🎉🎉I joined Peking University as an Assistant Professor.
2024-04 🎉🎉🎉I received offers from MIT and Harvard University.
2024-02 We have 7 papers (Explanation for ViT + Faithfulness of ViT + Diffusion Policy for Versatile Navigation + Subject-Driven Generation [Final rating: 455] + Diffusion Model for 3D Hand Pose Estimation + Adversarial Learning for 3D Pose Transfer + Efficient Diffusion Distillation [224->235]) accepted to CVPR 2024.
2024-01 We have 1 paper (Architectural Layout Generation) accepted to TPAMI 2024.
2023-12 We have 1 paper (Sign Pose Sequence Generation) accepted to AAAI 2024.
2023-10 🎉🎉🎉I was named one of the World's Top 2% Scientists in 2023 by Stanford University, and we have 4 papers (BEV Perception + Efficient ViT + 3D Motion Transfer + Graph Distillation) accepted to NeurIPS 2023.
2023-09 We have 1 paper (Practical Blind Image Denoising) accepted to MIR 2023 and 1 paper (Diffusion Model for HDR Deghosting) accepted to TCSVT 2023.
2023-08 🎉🎉🎉I received an offer from CMU.
2023-07 We have 1 paper (Semantic Image Synthesis) accepted to TPAMI 2023.
2023-06 We have 1 paper (Visible-Infrared Person Re-ID) accepted to ICCV 2023.
2023-05 We have 2 papers (Image Restoration Dataset + 3D-Aware Video Generation) accepted to CVPRW 2023 and 1 paper (3D Face Generation) accepted to JSTSP 2023.
2023-04 We have 1 paper (Speed-Aware Object Detection) accepted to ICML 2023, 2 papers (Lottery Ticket Hypothesis for ViT + Zero-shot Character Recognition) accepted to IJCAI 2023, 1 paper (3D Human Pose Estimation) accepted to PR 2023, and 1 paper (SAR Target Recognition) accepted to TGRS 2023.
2023-03 We have 6 papers (HDR Deghosting + Point Cloud Registration + Graph-Constrained House Generation + Mathematical Architecture Design + Text-to-Image Synthesis + Efficient Semantic Segmentation) accepted to CVPR 2023.
2023-02 We have 3 papers (Camouflaged Object Detection + Brain Vessel Image Segmentation + Cross-View Image Translation) accepted to ICASSP 2023 and 1 paper (Camouflaged Object Detection) accepted to TCSVT 2023.
2023-01 We have 1 paper (Semantic Image Synthesis) accepted to ICLR 2023 and 1 paper (Human Reaction Generation) accepted to TMM 2023.
2022-11 We have 4 papers (Real-Time Segmentation + Wearable Design + Efficient ViT Training + Text-Guided Image Editing) accepted to AAAI 2023, 1 paper (Person Pose and Facial Image Synthesis) accepted to IJCV 2022, 1 paper (Salient Object Detection) accepted to TIP 2022, and 1 paper (Object Detection Transformer) accepted to TCSVT 2022.
2022-10 We have 1 paper (Sinusoidal Neural Radiance Fields) accepted to BMVC 2022 and 1 paper (Guided Image-to-Image Translation) accepted to TPAMI 2022.
2022-09 We have 1 paper (Facial Expression Translation) accepted to TAFFC 2022 and 1 paper (Ship Detection) accepted to TGRS 2022.
2022-07 We have 5 papers (Real-Time SR + Video SR + Soft Token Pruning for ViT + 3D-Aware Human Synthesis + Video Semantic Segmentation) accepted to ECCV 2022, 1 paper (Gaze Correction and Animation) accepted to TIP 2022, and 1 paper (Cross-view Panorama Image Synthesis) accepted to PR 2022.
2022-06 We have 2 papers (Character Image Restoration + Character Image Denoising) accepted to ACM MM 2022.
2022-04 We have 1 paper (Real-Time Portrait Stylization) accepted to IJCAI 2022, 1 paper (Wide-Context Transformer for Semantic Segmentation) accepted to TGRS 2022, and 1 paper (Incremental Learning for Semantic Segmentation) accepted to TMM 2022.
2022-03 We have 5 papers (Text-to-Image Synthesis + 3D Human Pose Estimation + Text-Driven Image Manipulation + 3D Face Modeling + 3D Face Restoration) accepted to CVPR 2022, 1 paper (Image Generation) accepted to TPAMI 2022, and 1 paper (Cross-View Panorama Image Synthesis) accepted to TMM 2022.
2021-12 We have 2 papers (Generalized 3D Pose Transfer + Audio-Visual Speaker Tracking) accepted to AAAI 2022.
2021-11 We have 1 paper (Building Extraction in VHR Remote Sensing Images) accepted to TIP 2021.
2021-10 We have 3 papers (Cross-View Image Translation + Data-driven 3D Animation + Natural Image Matting) accepted to BMVC 2021.
2021-08 We have 1 paper (Layout-to-Image Translation) accepted to TIP 2021 and 1 paper (Unpaired Image-to-Image Translation) accepted to TNNLS 2021.
2021-07 We have 2 papers (Continuous Pixel-Wise Prediction + Unsupervised 3D Pose Transfer) accepted to ICCV 2021.
2021-06 We have 1 paper (Cross-View Exocentric to Egocentric Video Synthesis) accepted to ACM MM 2021 and 1 paper (Total Generate) accepted to TMM 2021.
2021-05 🎉🎉🎉I received an offer from ETH Zürich.
2020-09 🎉🎉🎉I received an offer from IIAI.
2020-08 We have 1 paper (Person Image Generation) accepted to BMVC 2020, 2 papers (Semantic Image Synthesis + Unsupervised Gaze Correction and Animation) accepted to ACM MM 2020, and 1 paper (Controllable Image-to-Image Translation) accepted to TIP 2020.
2020-07 We have 1 paper (Person Image Generation) accepted to ECCV 2020.
2020-05 We have 1 paper (Deep Dictionary Learning and Coding) accepted to TNNLS 2020 and 1 paper (Semantic Segmentation of Remote Sensing Images) accepted to TGRS 2020.
2020-02 We have 1 paper (Semantic-Guided Scene Generation) accepted to CVPR 2020.
2019-07 We have 1 paper (Keypoint-Guided Image Generation) accepted to ACM MM 2019.
2019-05 🎉🎉🎉I received an offer from the University of Oxford.
2019-02 We have 1 paper (Cross-View Image Translation) accepted to CVPR 2019.
2018-06 We have 1 paper (Hand Gesture-to-Gesture Translation) accepted to ACM MM 2018.
2018-02 We have 1 paper (Monocular Depth Estimation) accepted to CVPR 2018.
2016-07 We have 1 paper (Large Scale Image Retrieval) accepted to IJCAI 2016.
2015-08 We have 1 paper (Gender Classification) accepted to ACM MM 2015.

Research Lab

The goal of our research lab is to leverage AI to solve scientific problems. We are particularly focused on developing AI systems that can be applied to a wide range of scientific domains. Our interests include AIGC, AI4Science, computer vision, embodied AI, and large language models (LLMs). We believe that these tools, when properly harnessed, can drive breakthroughs in understanding complex systems and tackling real-world challenges through intelligent, adaptive, and data-driven approaches.

  • Xiaoyuan Wang (Visiting from CMU, USA🇺🇸)
  • Zhenyu Lu (Visiting from CMU, USA🇺🇸)
  • Wenbo Gou (Visiting from CMU, USA🇺🇸)
  • Jun Liu (Visiting from Northeastern University, USA🇺🇸)
  • Zihao Wang (Visiting from UPenn, USA🇺🇸)
  • Yao Gong (Visiting from UPenn, USA🇺🇸)
  • Junjie Zeng (Visiting from UMich, USA🇺🇸)
  • Xiaoyi Liu (Visiting from Washington University in St. Louis, USA🇺🇸)
  • Jingyi Wan (Visiting from University of Cambridge, UK🇬🇧)
  • Xuanyu Lai (Visiting from Imperial College London, UK🇬🇧)
  • Baohua Yin (Visiting from University of Sussex, UK🇬🇧)
  • Zeyu Zhang (Visiting from Australian National University, Australia🇦🇺)
  • Hongpeng Wang (Visiting from University of Sydney, Australia🇦🇺)
  • Pirzada Suhail (Visiting from IIT Bombay, India🇮🇳)
  • Zhixing Wang (Visiting from University of Malaya, Malaysia🇲🇾)
  • Nonghai Zhang (Intern from Peking University, China🇨🇳)
  • Chenyang Gu (Intern from Peking University, China🇨🇳)
  • Keyu Chen (Intern from Peking University, China🇨🇳)
  • Di Yu (Visiting from Tsinghua University, China🇨🇳)
  • Yuxuan Zhang (Visiting from Shanghai Jiao Tong University, China🇨🇳)
  • Renkai Wu (Visiting from Shanghai Jiao Tong University, China🇨🇳)
  • Yihua Shao (Visiting from University of Science and Technology Beijing, China🇨🇳)
  • Aoming Liang (Visiting from Westlake University, China🇨🇳)

Former members and visitors: Guillaume Thiry (Intern, now Software Engineer at Google), Sherwin Bahmani (Intern, now Ph.D. student at University of Toronto), Sanghwan Kim (Intern, now Ph.D. student at TUM), Alexandros Delitzas (Intern, now Ph.D. student at ETH Zürich and Max Planck Institute for Informatics), Jingfeng Rong (Intern, now Ph.D. student at Swiss Finance Institute), Yitong Xia (Intern from ETH Zürich), Boyan Duan (Intern, now Master's student at ETH Zürich), Baptiste Chopin (now Postdoc at INRIA), Xiaoyu Yi (Intern from Peking University)

Position Openings

Interested in joining our lab? Please find the application information below.

For prospective collaborators interested in AIGC and AI4Science, we have multiple positions available for Postdoc/Ph.D./Master/Intern researchers. We welcome applicants from diverse disciplines, including Computer Science, Medicine, Physics, Chemistry, Finance, Agriculture, Biology, Archeology, Meteorology, Geography, Architecture, and more. If you are interested in using AI to solve problems in your field, you are encouraged to apply. Please email me with your self-introduction, the project you are interested in (including the problem you are trying to solve and how you plan to solve it, being as specific as possible), your transcript, and your CV. Send your application to haotang@pku.edu.cn. I might not be able to respond to all emails due to the large volume.

For Ph.D./Master applicants, we have two Ph.D. openings for domestic students (in addition to two Ph.D./Master openings for international students) each year; please reach out at least one year before the application deadline. For visiting students or research interns, we welcome undergraduate and graduate students from all over the world to apply for research internships of more than six months. Our interns have published many top-tier conference/journal papers (e.g., TPAMI, CVPR, NeurIPS) and have been admitted to Postdoc/Ph.D./Master programs at prestigious institutions such as MIT, Harvard, Google, University of Toronto, Caltech, ETH Zürich, NTU, NUS, and TUM.

International Collaborations

Our lab maintains strong collaborative relationships with several leading international research institutions, including

  • USA🇺🇸: MIT, Harvard, Stanford University, CMU, Princeton University, UIUC, UMich, Northeastern University, University of Maryland, University of Texas at Austin, UC Irvine, University of Illinois at Chicago, Illinois Institute of Technology, University of Connecticut, Texas State University, University of Georgia, Clemson University, University of Oregon, College of William & Mary
  • Canada🇨🇦: University of Toronto, Simon Fraser University
  • Switzerland🇨🇭: ETH Zürich, EPFL
  • UK🇬🇧: University of Oxford, University of Cambridge, University of Leicester, University of Warwick
  • Italy🇮🇹: University of Trento, FBK, Politecnico di Milano, University of Modena and Reggio Emilia
  • Germany🇩🇪: TUM, University of Würzburg
  • France🇫🇷: INRIA, University of Lille
  • Finland🇫🇮: University of Oulu
  • Netherlands🇳🇱: TU Delft
  • Belgium🇧🇪: KU Leuven
  • Bulgaria🇧🇬: INSAIT
  • Singapore🇸🇬: NUS, NTU
  • Japan🇯🇵: University of Tokyo, National Institute of Informatics
  • South Korea🇰🇷: Sungkyunkwan University
  • Australia🇦🇺: University of Adelaide, ANU, Monash University, University of Technology Sydney
  • UAE🇦🇪: IIAI, MBZUAI
  • Hong Kong🇭🇰: University of Hong Kong, Hong Kong University of Science and Technology

I am deeply grateful for the opportunities to collaborate with such esteemed institutions and for the valuable contributions they have made to our joint research efforts. Additionally, we maintain long-term collaborations with industry partners, including Google, Meta, Amazon, Cisco, Western Digital, Mercedes-Benz, Xiaohongshu, Alibaba, and Tencent, aiming to translate cutting-edge research into practical applications and drive technological advancement.

Featured Publications

(Including CVPR, NeurIPS, TPAMI, ICML, ICLR, ICCV, ECCV, AAAI, IJCAI, ACM MM)

My Students or Interns, *Corresponding Author(s)

  1. DiffFNO: Diffusion Fourier Neural Operator
     Xiaoyi Liu,  Hao Tang*
    arXiv preprint, 2024
  2. AllRestorer: All-in-One Transformer for Image Restoration under Composite Degradations
     Jiawei Mao,  Yu Yang,  Xuesong Yin,  Ling Shao,  Hao Tang*
    arXiv preprint, 2024
  3. KMM: Key Frame Mask Mamba for Extended Motion Generation
     Zeyu Zhang,  Hang Gao,  Akide Liu,  Qi Chen,  Feng Chen,  Yiran Wang,  Danning Li,  Hao Tang*
    arXiv preprint, 2024
  4. Towards Self-Supervised FG-SBIR with Unified Sample Feature Alignment and Multi-Scale Token Recycling
     Jianan Jiang,  Hao Tang,  Zhilin Jiang,  Weiren Yu,  Di Wu
    arXiv preprint, 2024
  5. GWQ: Gradient-Aware Weight Quantization for Large Language Models
     Yihua Shao,  Siyu Liang,  Xiaolin Lin,  Zijian Ling,  Zixian Zhu,  Minxi Yan,  Haiyang Liu,  Siyu Chen,  Ziyang Yan,  Yilan Meng,  Chenyu Zhang,  Haotong Qin*,  Michele Magno,  Yang Yang,  Zhen Lei,  Yan Wang,  Jingcai Guo,  Ling Shao,  Hao Tang*
    arXiv preprint, 2024
  6. M2M: Learning Controllable Multi of Experts and Multi-Scale Operators Are the Partial Differential Equations Need
     Aoming Liang,  Zhaoyang Mu,  Pengxiao Lin,  Cong Wang,  Mingming Ge,  Ling Shao,  Dixia Fan*,  Hao Tang*
    arXiv preprint, 2024
  7. Toward Zero-Shot Learning for Visual Dehazing of Urological Surgical Robots
     Renkai Wu,  Xianjin Wang,  Pengchen Liang,  Zhenyu Zhang,  Qing Chang*,  Hao Tang*
    arXiv preprint, 2024
  8. Barbie: Text to Barbie-Style 3D Avatars
     Xiaokun Sun,  Zhenyu Zhang,  Ying Tai,  Qian Wang,  Hao Tang,  Zili Yi,  Jian Yang
    arXiv preprint, 2024
  9. InfiniMotion: Mamba Boosts Memory in Transformer for Arbitrary Long Motion Generation
     Zeyu Zhang,  Akide Liu,  Qi Chen,  Feng Chen,  Ian Reid,  Richard Hartley,  Bohan Zhuang,  Hao Tang*
    arXiv preprint, 2024
  10. A Survey on Multimodal Wearable Sensor-based Human Action Recognition
     Jianyuan Ni,  Hao Tang,  Syed Tousiful Haque,  Yan Yan,  Anne HH Ngu
    arXiv preprint, 2024
  11. Enlighten-Your-Voice: When Multimodal Meets Zero-shot Low-light Image Enhancement
     Xiaofeng Zhang,  Zishan Xu,  Hao Tang,  Chaochen Gu,  Wei Chen,  Shanying Zhu,  Xinping Guan
    arXiv preprint, 2024
  12. MaskSAM: Towards Auto-prompt SAM with Mask Classification for Medical Image Segmentation
     Bin Xie,  Hao Tang,  Bin Duan,  Dawen Cai,  Yan Yan
    arXiv preprint, 2024
  13. Efficient Pruning of Large Language Model with Adaptive Estimation Fusion
     Jun Liu,  Chao Wu,  Changdi Yang,  Hao Tang*,  Haoye Dong,  Zhenglun Kong,  Geng Yuan,  Wei Niu,  Dong Huang*,  Yanzhi Wang*
    arXiv preprint, 2024
  14. StableGarment: Garment-Centric Generation via Stable Diffusion
     Rui Wang,  Hailong Guo,  Jiaming Liu,  Huaxia Li,  Haibo Zhao,  Xu Tang,  Yao Hu,  Hao Tang,  Peipei Li
    arXiv preprint, 2024
  15. ConsistentAvatar: Learning to Diffuse Fully Consistent Talking Head Avatar with Temporal Guidance
     Haijie Yang,  Zhenyu Zhang,  Hao Tang,  Jianjun Qian,  Jian Yang
    In ACM MM 2024, Melbourne, Australia
  16. CoIn: A Lightweight and Effective Framework for Story Visualization and Continuation
     Ming Tao,  Bingkun Bao,  Hao Tang,  Yaowei Wang,  Changsheng Xu
    In ACM MM 2024, Melbourne, Australia
  17. Physical Adversarial Attack Meets Computer Vision: A Decade Survey
     Hui Wei,  Hao Tang,  Xuemei Jia,  Zhixiang Wang,  Hanxun Yu,  Zhubo Li,  Shin'ichi Satoh,  Luc Van Gool,  Zheng Wang
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024
  18. Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM
     Zeyu Zhang,  Akide Liu,  Ian Reid,  Richard Hartley,  Bohan Zhuang,  Hao Tang*
    In ECCV 2024, Milan, Italy
  19. 3D Weakly Supervised Semantic Segmentation with 2D Vision-Language Guidance
     Xiaoxu Xu,  Yitian Yuan,  Jinlong Li,  Qiudan Zhang,  Zequn Jie,  Lin Ma,  Hao Tang,  Nicu Sebe,  Xu Wang
    In ECCV 2024, Milan, Italy
  20. StoryImager: A Unified and Efficient Framework for Coherent Story Visualization and Completion
     Ming Tao,  Bingkun Bao,  Hao Tang,  Yaowei Wang,  Changsheng Xu
    In ECCV 2024, Milan, Italy
  21. InstructGIE: Towards Generalizable Image Editing
     Zichong Meng,  Changdi Yang,  Jun Liu,  Hao Tang*,  Pu Zhao*,  Yanzhi Wang*
    In ECCV 2024, Milan, Italy
  22. SCP-Diff: Photo-Realistic Semantic Image Synthesis with Spatial-Categorical Joint Prior
     Huan-ang Gao,  Mingju Gao,  Jiaju Li,  Wenyi Li,  Rong Zhi,  Hao Tang,  Hao Zhao
    In ECCV 2024, Milan, Italy
  23. Dataset Growth
     Ziheng Qin,  Zhaopan Xu,  Yukun Zhou,  Zangwei Zheng,  Zebang Cheng,  Hao Tang,  Lei Shang,  Baigui Sun,  Xiaojiang Peng,  Radu Timofte,  Hongxun Yao,  Kai Wang,  Yang You
    In ECCV 2024, Milan, Italy
  24. HandDiff: 3D Hand Pose Estimation with Diffusion on Image-Point Cloud
     Wencan Cheng,  Hao Tang,  Luc Van Gool,  Jong Hwan Ko
    In CVPR 2024, Seattle, USA
  25. Versatile Navigation under Partial Observability via Value-guided Diffusion Policy
     Gengyu Zhang,  Hao Tang,  Yan Yan
    In CVPR 2024, Seattle, USA
  26. Towards Robust 3D Pose Transfer with Adversarial Learning
     Haoyu Chen,  Hao Tang,  Ehsan Adeli,  Guoying Zhao
    In CVPR 2024, Seattle, USA
  27. On the Faithfulness of Vision Transformer Explanations
     Junyi Wu,  Weitai Kang,  Hao Tang,  Yuan Hong,  Yan Yan
    In CVPR 2024, Seattle, USA
  28. Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer
     Junyi Wu,  Bin Duan,  Weitai Kang,  Hao Tang,  Yan Yan
    In CVPR 2024, Seattle, USA
  29. SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation
     Yuxuan Zhang,  Jiaming Liu,  Yiren Song,  Rui Wang,  Hao Tang,  Jinpeng Yu,  Huaxia Li,  Xu Tang,  Yao Hu,  Han Pan,  Zhongliang Jing
    In CVPR 2024, Seattle, USA
  30. Towards Online Real-Time Memory-based Video Inpainting Transformers
     Guillaume Thiry,  Hao Tang*,  Radu Timofte,  Luc Van Gool
    In CVPR 2024, Seattle, USA
  31. G2P-DDM: Generating Sign Pose Sequence from Gloss Sequence with Discrete Diffusion Model
     Pan Xie,  Qipeng Zhang,  Taiying Peng,  Hao Tang*,  Yao Du,  Zexian Li
    In AAAI 2024, Vancouver, Canada
  32. HotBEV: Hardware-oriented Transformer-based Multi-View 3D Detector for BEV Perception
     Peiyan Dong,  Zhenglun Kong,  Xin Meng,  Pinrui Yu,  Yifan Gong,  Geng Yuan,  Hao Tang*,  Yanzhi Wang
    In NeurIPS 2023, New Orleans, USA
  33. PackQViT: Faster Sub-8-bit Vision Transformers via Full and Packed Quantization on the Mobile
     Peiyan Dong,  Lei Lu,  Chao Wu,  Cheng Lyu,  Geng Yuan,  Hao Tang*,  Yanzhi Wang
    In NeurIPS 2023, New Orleans, USA
  34. LART: Neural Correspondence Learning with Latent Regularization Transformer for 3D Motion Transfer
     Haoyu Chen,  Hao Tang,  Radu Timofte,  Luc Van Gool,  Guoying Zhao
    In NeurIPS 2023, New Orleans, USA
  35. Does Graph Distillation See Like Vision Dataset Counterpart?
     Beining Yang,  Kai Wang,  Qingyun Sun,  Cheng Ji,  Xingcheng Fu,  Hao Tang,  Yang You,  Jianxin Li
    In NeurIPS 2023, New Orleans, USA
  36. Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis
     Kai Zhang,  Yawei Li,  Jingyun Liang,  Jiezhang Cao,  Yulun Zhang,  Hao Tang,  Deng-Ping Fan,  Radu Timofte,  Luc Van Gool
    Springer Machine Intelligence Research (MIR), 2023
  37. Learning Concordant Attention via Target-aware Alignment for Visible-Infrared Person Re-identification
     Jianbing Wu,  Hong Liu,  Yuxin Su,  Wei Shi,  Hao Tang
    In ICCV 2023, Paris, France
  38. SpeedDETR: Speed-aware Transformers for End-to-end Object Detection
     Peiyan Dong,  Zhenglun Kong,  Xin Meng,  Peng Zhang,  Hao Tang*,  Yanzhi Wang,  Chih-Hsien Chou
    In ICML 2023, Hawaii, USA
  39. Data Level Lottery Ticket Hypothesis for Vision Transformers
     Xuan Shen,  Zhenglun Kong,  Minghai Qin,  Peiyan Dong,  Geng Yuan,  Xin Meng,  Hao Tang,  Xiaolong Ma,  Yanzhi Wang
    In IJCAI 2023, Macao, China
  40. Graph Transformer GANs for Graph-Constrained House Generation
     Hao Tang,  Zhenyu Zhang,  Humphrey Shi,  Bo Li,  Ling Shao,  Nicu Sebe,  Radu Timofte,  Luc Van Gool
    In CVPR 2023, Vancouver, Canada
  41. Unsupervised Deep Probabilistic Approach for Partial Point Cloud Registration
     Guofeng Mei,  Hao Tang,  Xiaoshui Huang,  Weijie Wang,  Juan Liu,  Jian Zhang,  Luc Van Gool,  Qiang Wu
    In CVPR 2023, Vancouver, Canada
  42. DeepMAD: Mathematical Architecture Design for Deep Convolutional Neural Network
     Xuan Shen,  Yaohua Wang,  Ming Lin,  Yilun Huang,  Hao Tang,  Xiuyu Sun,  Yanzhi Wang
    In CVPR 2023, Vancouver, Canada
  43. GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis
     Ming Tao,  Bingkun Bao,  Hao Tang,  Changsheng Xu
    In CVPR 2023, Vancouver, Canada
  44. Edge Guided GANs with Contrastive Learning for Semantic Image Synthesis
     Hao Tang,  Xiaojuan Qi,  Guolei Sun,  Dan Xu,  Nicu Sebe,  Radu Timofte,  Luc Van Gool
    In ICLR 2023, Kigali, Rwanda
  45. 3D-Aware Semantic-Guided Generative Model for Human Synthesis
     Jichao Zhang,  Enver Sangineto,  Hao Tang,  Aliaksandr Siarohin,  Zhun Zhong,  Nicu Sebe,  Wei Wang
    In ECCV 2022, Tel Aviv, Israel
  46. Towards Interpretable Video Super-Resolution via Alternative Optimization
     Jiezhang Cao,  Jingyun Liang,  Kai Zhang,  Wenguan Wang,  Qin Wang,  Yulun Zhang,  Hao Tang,  Luc Van Gool
    In ECCV 2022, Tel Aviv, Israel
  47. MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
     Wenhao Li,  Hong Liu,  Hao Tang,  Pichao Wang,  Luc Van Gool
    In CVPR 2022, New Orleans, USA
  48. DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis
     Ming Tao,  Hao Tang,  Fei Wu,  Xiaoyuan Jing,  Bingkun Bao,  Changsheng Xu
    In CVPR 2022, New Orleans, USA
  49. GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
     Hao Tang,  Wei Wang,  Dan Xu,  Yan Yan,  Nicu Sebe
    In ACM MM 2018, Seoul, South Korea