Deep Learning: Classics and Trends


Coming Up

| Date | Presenter | Topic or Paper |
| --- | --- | --- |
| 2021.01.29 | Johannes Brandstetter | Hopfield Networks is All You Need |
| 2021.02.05 | Sam Greydanus | Scaling Laws for Autoregressive Generative Modeling |
| 2021.02.12 | Robin Tibor Schirrmeister | Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features |
| 2021.02.19 | Krzysztof Choromanski | Rethinking Attention with Performers |
| 2021.02.26 | Yang Song | Score-Based Generative Modeling through Stochastic Differential Equations |
| 2021.03.05 | Rishabh Agarwal | Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning |
| 2021.03.12 | Rohan Anil | Towards Practical Second Order Optimization for Deep Learning |
| 2021.03.19 | Jay J. Thiagarajan | TBD |
| 2021.03.26 | Xinlei Chen | Exploring Simple Siamese Representation Learning |
| 2021.04.02 | Hady Elsahar, Muhammad Khalifa, Marc Dymetman | A Distributional Approach to Controlled Text Generation |
| 2021.04.09 | Lilian Weng | Asymmetric self-play for automatic goal discovery in robotic manipulation |
| 2021.04.16 | Frederik Kratzert, Pieter-Jan Hoedt | MC-LSTM: Mass-conserving LSTM |

The Story

“A super influential reading group that has achieved cult-like status.” —John Sears

Deep Learning: Classics and Trends (DLCT) is a paper reading group that Rosanne has run since 2018. It started within Uber AI Labs, with the support of Zoubin, Ken, and Jason, and the help of many others, when we felt the need for a space to sample the overwhelming volume of papers and to hold free-form, judgemental (JK) discussions; or, as Piero puts it, to “ask a million questions”.

Since then, it has grown much larger: it first opened up to the broader machine learning community within Uber, then to the general public in 2019. In March 2020, in light of COVID-19, we moved all meetings online, making them radically accessible to anyone from anywhere. Since June 2020, DLCT has operated under ML Collective, with a mission of making researchers more connected.

You will see most MLC members here weekly, as well as our long-time friends, supporters, and potential collaborators from the broader community.

  1. Time: Every Friday, 12pm - 1pm Pacific Time
  2. Place: Virtually on Zoom (up to 100 participants)
  3. Format: Presentation-based. An invited speaker presents a paper with slides; much of the time the speaker is an author of the paper.
  4. Scope: Deep learning, old (a.k.a. “let’s revisit the 2014 GAN paper”) and new (a.k.a. “look at this blog post from yesterday”).
  5. Join the email group to receive announcements of upcoming talks. I email only once a week, so I recommend subscribing to "Every new message".
  6. Nominate a speaker, a paper, or just tell me what you think. Self-nominations are welcome and encouraged!

Past Events

| Date | Presenter | Topic or Paper |
| --- | --- | --- |
| 2021.01.22 | Andrey Malinin | Uncertainty Estimation with Prior Networks [1] [2] [3] [4] |
| 2021.01.15 | Jong Wook Kim | Learning Transferable Visual Models From Natural Language Supervision [Slides] |
| 2021.01.08 | Liyuan Liu | Understanding the Difficulty of Training Transformers [Slides] |
| 2020.12.18 | Julien Cornebise | AI for Good and Ethics-Washing: a Self-Defense Primer [Slides] |
| 2020.12.04 | Rishabh Agarwal | How I Learned To Stop Worrying And Love Offline RL [Slides] |
| 2020.11.20 | Sachit Menon | PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [Slides] |
| 2020.11.13 | Luke Metz | Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves [Slides] |
| 2020.11.06 | Karl Cobbe | Phasic Policy Gradient [Slides] |
| 2020.10.30 | Angelos Katharopoulos | Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention [Slides] |
| 2020.10.23 | Ryan Lowe | Learning to Summarize with Human Feedback [Slides] |
| 2020.10.16 | Jason Lee | Latent Variable Models and Iterative Refinement for Non-Autoregressive Neural Machine Translation [1] [2] [3] [Slides] |
| 2020.10.09 | Utku Evci | Difficulty of Sparse Training and RigL [1] [2] |
| 2020.10.02 | Shrimai Prabhumoye | Controllable Text Generation: Should machines reflect the way humans interact in society? [1] [2] [3] [Slides] |
| 2020.09.25 | Sidak Pal Singh | Model Fusion via Optimal Transport [Slides] |
| 2020.09.18 | Katherine Ye | Penrose: From Mathematical Notation to Beautiful Diagrams [Slides] |
| 2020.09.11 | Jesse Mu | Compositional Explanations of Neurons [Slides] |
| 2020.09.04 | Yian Yin | Unequal effects of the COVID-19 pandemic on scientists |
| 2020.08.28 | Arianna Ornaghi | Gender Attitudes in the Judiciary: Evidence from U.S. Circuit Courts [Slides] |
| 2020.08.21 | Anna Goldie, Azalia Mirhoseini | Chip Placement with Deep Reinforcement Learning [Paper] |
| 2020.08.14 | Zhongqi Miao | Deep Learning and Realistic Datasets [1] [2] [Slides] |
| 2020.07.31 | Dan Hendrycks | Out-of-distribution robustness in computer vision and NLP [1] [2] [Slides] |
| 2020.07.24 | Ben Mann | Language Models are Few-Shot Learners [Slides] |

...

Slowly migrating entries over from the old site...