Arash Vahdat is a principal scientist and research manager leading the fundamental generative AI research team at NVIDIA Research. Before joining NVIDIA, he was a research scientist at D-Wave Systems, where he worked on generative learning and its applications in label-efficient training. Before D-Wave, Arash was a research faculty member at Simon Fraser University (SFU), where he led deep learning-based video analysis research and taught master's courses on machine learning for big data. His current research focuses on generative learning with applications in multimodal training (vision+X), representation learning, 3D generation, and AI for science.
E-mail:
LinkedIn
Google Scholar
Twitter
Research Interests
- Deep generative learning: diffusion models, variational autoencoders, energy-based models
- Applications: image/graph/text/3D synthesis, controllable generation, label-efficient learning
- Efficient neural networks
- Representation learning
Students and Interns
Throughout my career, I have been fortunate to mentor bright students and interns. As I built the fundamental generative AI research team at NVIDIA, the team hosted many more students. They include:
- Seul Lee, Korea Advanced Institute of Science & Technology (KAIST)
- Omri Avrahami, Hebrew University of Jerusalem
- Ka-Hei (Edward) Hui, Chinese University of Hong Kong
- Giannis Daras, University of Texas at Austin
- Dejia Xu, University of Texas at Austin
- Yilun Xu*, Massachusetts Institute of Technology (MIT)
- Jae Hyun Lim, Mila, Université de Montréal
- James Thornton*, University of Oxford
- Jiarui Xu*, University of California, San Diego
- Ajay Jain*, University of California, Berkeley
- Guan-Horng Liu*, Georgia Institute of Technology
- Yuji Roh*, Korea Advanced Institute of Science & Technology (KAIST)
- Yuntian Deng*, Harvard University
- Paul Micaelli*, University of Edinburgh
- Xiaohui Zeng*, University of Toronto
- Tim Dockhorn*, University of Waterloo
- Zhisheng Xiao, University of Chicago
- Divyansh Garg*, Stanford University
- Zhen Dong*, University of California, Berkeley
- Jyoti Aneja, University of Illinois Urbana-Champaign
- Wenling (Wendy) Shang*, University of Amsterdam
- Tanmay Gupta, University of Illinois Urbana-Champaign
- Mehran Khodabandeh, Simon Fraser University
- Mostafa S. Ibrahim, Simon Fraser University
- Soumali Roychowdhury, IMT School of Advanced Studies
- Zhiwei Deng, Simon Fraser University
* I served as a co-mentor.
Recent Invited Talks
- Panelist at the NeurIPS 2023 Workshop on Diffusion Models, December 2023.
- Denoising Diffusion Models: The New Generative Learning Big Bang, Deep Generative Modeling Course at Princeton University, November 2023.
- Generative AI in the Modern Era: A Visual Odyssey, Berkeley Artificial Intelligence Research (BAIR) Tech Talk, September 2023.
- Generative AI in Practice: A Visual Odyssey, distinguished alumni talk at the 50th Anniversary of CS@Simon Fraser University, September 2023.
- Panelist at the ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling, July 2023.
- Denoising Diffusion Models: Generative Models of Modern Deep Learning Era, Machine Learning and Science Forum, Berkeley Institute for Data Science, April 2023.
- Denoising Diffusion Models: Generative Models of Modern Deep Learning Era, Department of Computing, Imperial College London, March 2023.
- Denoising Diffusion Models: Generative Models of Modern Deep Learning Era, Department of Computer Science, Stanford University, March 2023.
- Denoising Diffusion Models: The Generative Learning Champion of the 2020s, hosted by Soheil Feizi, Computer Science Department, University of Maryland, College Park (UMD), November 2022.
- Tackling the Generative Learning Trilemma with Accelerated Diffusion Models, hosted by Ying Nian Wu at the Center for Vision, Cognition, Learning, and Autonomy, University of California, Los Angeles, February 2022.
- Tackling the Generative Learning Trilemma with Accelerated Diffusion Models, Computer Vision Group, University of Bern, February 2022.
- Tackling the Generative Learning Trilemma with Accelerated Diffusion Models, hosted by Rosanne Liu at ML Collective, February 2022.
- New Frontiers in Deep Generative Learning, Open Data Science Conference, November 2021.
- Hybrid Hierarchical Generative Models for Image Synthesis, hosted by Mohammad Norouzi at Google Brain Toronto, December 2020.
- Deep Hierarchical Variational Autoencoder for Image Synthesis, hosted by Amir Khash Ahmadi at Autodesk AI Lab, November 2020.
- Deep Hierarchical Variational Autoencoder for Image Synthesis, hosted by Juan Felipe Carrasquilla at the Vector Institute, October 2020.
- NVAE: A Deep Hierarchical Variational Autoencoder, hosted by Danilo Rezende at DeepMind, September 2020.
- On Continuous Relaxation of Discrete Latent Variables, hosted by Stefano Ermon, Department of Computer Science, Stanford University, November 2019.
Workshops and Tutorials
- NeurIPS tutorial on Latent Diffusion Models: Is the Generative AI Revolution Happening in Latent Space?, Neural Information Processing Systems (NeurIPS), 2023 [website]
- CVPR tutorial on Denoising Diffusion Models: A Generative Learning Big Bang, Computer Vision and Pattern Recognition (CVPR), 2023 [website]
- NeurIPS workshop on Score-Based Methods, Neural Information Processing Systems (NeurIPS), 2022 [website]
- CVPR tutorial on Denoising Diffusion-based Generative Modeling: Foundations and Applications, Computer Vision and Pattern Recognition (CVPR), 2022 [website]
- ECCV tutorial on New Frontiers for Learning with Limited Labels or Data, European Conference on Computer Vision (ECCV), 2020 [website]
Services
- Area Chair:
- NeurIPS (2021, 2022, 2024)
- ICML (2023)
- ICLR (2021, 2022, 2023, 2024)
- Reviewer:
- NeurIPS (2017, 2019, 2020)
- ICML (2018, 2020)
- CVPR (2015, 2018, 2019, 2021, 2022)
- ICCV (2015)
- ECCV (2014)
- PAMI (2011, 2013, 2015)
- SIGGRAPH (2022)
- Pattern Recognition (2015)