Arash Vahdat

Arash Vahdat is a principal research scientist at NVIDIA Research specializing in generative AI technologies. Before joining NVIDIA, he was a research scientist at D-Wave Systems, where he worked on generative learning and its applications in label-efficient training. Before D-Wave, Arash was a research faculty member at Simon Fraser University (SFU), where he led deep learning-based video analysis research and taught master's courses on machine learning for big data. Arash's current areas of research include generative learning, representation learning, and efficient deep learning.

E-mail:
LinkedIn
Google Scholar
Twitter

Research Interests

  • Deep generative learning: diffusion models, variational autoencoders, energy-based models
  • Applications: image/graph/text/3D synthesis, controllable generation, weakly supervised learning
  • Efficient neural networks
  • Representation learning

Students and Interns

Throughout my career, I have been fortunate to mentor bright students and interns, including:

* I served as a co-mentor.

Recent Invited Talks

  • Denoising Diffusion Models: Generative Models of Modern Deep Learning Era, Machine Learning and Science Forum, Berkeley Institute for Data Science, April 2023.
  • Denoising Diffusion Models: Generative Models of Modern Deep Learning Era, Department of Computing, Imperial College London, March 2023.
  • Denoising Diffusion Models: Generative Models of Modern Deep Learning Era, Department of Computer Science, Stanford University, March 2023.
  • Denoising Diffusion Models: the Generative Learning Champion of the 2020s, hosted by Soheil Feizi, Computer Science Department, University of Maryland, College Park (UMD), Nov 2022.
  • Tackling the Generative Learning Trilemma with Accelerated Diffusion Models, hosted by Ying Nian Wu at the Center for Vision, Cognition, Learning, and Autonomy, University of California, Los Angeles, Feb 2022.
  • Tackling the Generative Learning Trilemma with Accelerated Diffusion Models, Computer Vision Group, University of Bern, Feb 2022.
  • Tackling the Generative Learning Trilemma with Accelerated Diffusion Models, hosted by Rosanne Liu at ML Collective, Feb 2022.
  • New Frontiers in Deep Generative Learning, Open Data Science Conference, Nov 2021.
  • Hybrid Hierarchical Generative Models for Image Synthesis, hosted by Mohammad Norouzi at Google Brain Toronto, Dec 2020.
  • Deep Hierarchical Variational Autoencoder for Image Synthesis, hosted by Amir Khash Ahmadi at Autodesk AI Lab, Nov 2020.
  • Deep Hierarchical Variational Autoencoder for Image Synthesis, hosted by Juan Felipe Carrasquilla at the Vector Institute, Oct 2020.
  • NVAE: A Deep Hierarchical Variational Autoencoder, hosted by Danilo Rezende at DeepMind, Sept 2020.
  • On Continuous Relaxation of Discrete Latent Variables, hosted by Stefano Ermon, Department of Computer Science, Stanford University, Nov 2019.

Workshops and Tutorials

  • CVPR tutorial on Denoising Diffusion Models: A Generative Learning Big Bang, Computer Vision and Pattern Recognition (CVPR), 2023 [Coming Soon]
  • NeurIPS workshop on Score-Based Methods, Neural Information Processing Systems (NeurIPS), 2022 [website]
  • CVPR tutorial on Denoising Diffusion-based Generative Modeling: Foundations and Applications, Computer Vision and Pattern Recognition (CVPR), 2022 [website]
  • ECCV tutorial on New Frontiers for Learning with Limited Labels or Data, European Conference on Computer Vision (ECCV), 2020 [website]

Services

  • Area Chair:
    • NeurIPS (2021, 2022)
    • ICML (2023)
    • ICLR (2021, 2022, 2023)
  • Reviewer:
    • NeurIPS (2017, 2019, 2020)
    • ICML (2018, 2020)
    • CVPR (2015, 2018, 2019, 2021, 2022)
    • ICCV (2015)
    • ECCV (2014)
    • PAMI (2011, 2013, 2015)
    • SIGGRAPH (2022)
    • Pattern Recognition (2015)