Deep Learning: Classics and Trends


Coming Up

| Date | Presenter | Topic or Paper |
| --- | --- | --- |
| 2020.10.30 | Angelos Katharopoulos | Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention |
| 2020.11.06 | Karl Cobbe | Phasic Policy Gradient |
| 2020.11.13 | Luke Metz | Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves |
| 2020.11.20 | Sachit Menon | PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models |

The Story

“A super influential reading group that has achieved cult-like status.” —John Sears

Deep Learning: Classics and Trends (DLCT) is a paper reading group run by Rosanne since 2018. It started within Uber AI Labs, with the support of Zoubin, Ken, and Jason, and the help of many, when we felt the need for a space to sample the overwhelmingly large number of papers, and to hold free-form, judgemental (JK) discussions; or as Piero puts it, to “ask a million questions”.

Since then, it has grown much larger, first opening up to the broader machine learning community within Uber, then to the general public in 2019. Since March 2020, in light of COVID-19, we have held all meetings virtually, making the group radically accessible to anyone from anywhere. Starting June 2020, DLCT operates under ML Collective, with a mission of making researchers more connected.

You will see most MLC members here weekly, as well as our long-time friends, supporters, and potential collaborators from the broader community.

  1. Time: Every Friday, 12pm - 1pm Pacific Time
  2. Place: Virtually on Zoom (up to 100 participants)
  3. Format: Presentation based. An invited speaker presents a paper with slides; much of the time the speaker is an author of the paper.
  4. Scope: Deep learning, old (a.k.a. “let’s revisit the 2014 GAN paper”) and new (a.k.a. “look at this blog post from yesterday”).
  5. Join the email group to receive announcements of upcoming talks. I email only once a week, so I'd recommend subscribing to "Every new message".
  6. Nominate a speaker, a paper, or just tell me what you think.

If you are an ML researcher and have a paper that you are proud of: tell us (besides posting on Twitter)! This could be yet another platform for feedback and engagement with a community of 800+ students, scientists, ML engineers, and enthusiasts.

Past Events

| Date | Presenter | Topic or Paper |
| --- | --- | --- |
| 2020.10.23 | Ryan Lowe | Learning to Summarize with Human Feedback |
| 2020.10.16 | Jason Lee | Latent Variable Models and Iterative Refinement for Non-Autoregressive Neural Machine Translation ["discrete"; EMNLP'18] ["hybrid"; AAAI'20] ["continuous"; EMNLP'20] |
| 2020.10.09 | Utku Evci | Difficulty of Sparse Training and RigL [1] [2] |
| 2020.10.02 | Shrimai Prabhumoye | Controllable Text Generation: Should machines reflect the way humans interact in society? [1] [2] [3] [Slides] |
| 2020.09.25 | Sidak Pal Singh | Model Fusion via Optimal Transport [Slides] |
| 2020.09.18 | Katherine Ye | Penrose: From Mathematical Notation to Beautiful Diagrams [Slides] |
| 2020.09.11 | Jesse Mu | Compositional Explanations of Neurons [Slides] |
| 2020.09.04 | Yian Yin | Unequal effects of the COVID-19 pandemic on scientists |
| 2020.08.28 | Arianna Ornaghi | Gender Attitudes in the Judiciary: Evidence from U.S. Circuit Courts [Slides] |
| 2020.08.21 | Anna Goldie, Azalia Mirhoseini | Chip Placement with Deep Reinforcement Learning [Paper] |
| 2020.08.14 | Zhongqi Miao | Deep Learning and Realistic Datasets [1] [2] [Slides] |
| 2020.07.31 | Dan Hendrycks | Out-of-distribution robustness in computer vision and NLP [1] [2] [Slides] |
| 2020.07.24 | Ben Mann | Language Models are Few-Shot Learners [Slides] |


Slowly migrating from the old site...