Deep Learning: Classics and Trends


Coming Up

Date Presenter Topic or Paper
2021.06.25 Josh Roy Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion
2021.07.09 Aviral Kumar Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning
2021.07.16 Evgenii Nikishin Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation
2021.07.23 Srishti Yadav Fair Attribute Classification through Latent Space De-biasing
2021.07.30 Jaehoon Lee Dataset Meta-Learning from Kernel Ridge-Regression
2021.08.06 Emilien Dupont Generative Models as Distributions of Functions
2021.08.13 Kale-ab Tessera Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
2021.08.20 Jonathan Ho Several papers about diffusion models
2021.09.10 Preetum Nakkiran Distributional Generalization: A New Kind of Generalization

Deep Learning: Classics and Trends (DLCT) is a paper reading group run by Rosanne Liu since 2018. Read more about its genesis story, underlying philosophy, and random trivia on its original site.

We do not record talks; keeping them off the record creates a safe, cozy space that allows “stupid questions”, which we have found to be the best way to connect and learn.

Subscribe with your email to receive a weekly reminder from Rosanne containing the Zoom link to join the meeting.

Nominate a speaker, a paper, or just share what you think. Self-nominations are welcome and encouraged!

Past Events

Date Presenter Topic or Paper
2021.06.18 Ben Mildenhall Neural Volumetric Rendering: How NeRF Works [1] [2]
2021.06.11 Preetum Nakkiran The Deep Bootstrap: Good Online Learners are Good Offline Generalizers [Slides]
2021.06.04 Sharon Zhou Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [Slides]
2021.05.21 Richard Song Closing the Sim-To-Real Gap with Evolutionary Meta-Learning [Slides]
2021.05.14 Rosanne Liu A few papers I saw at ICLR 2021 [Slides]
2021.04.30 Angjoo Kanazawa Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image [Slides]
2021.04.23 Corentin Tallec Bootstrap your own latent: A new approach to self-supervised Learning [Slides]
2021.04.16 Pieter-Jan Hoedt, Frederik Kratzert MC-LSTM: Adding mass conservation to RNNs [Slides]
2021.04.09 Lilian Weng Asymmetric self-play for automatic goal discovery in robotic manipulation [Slides]
2021.04.02 Hady Elsahar, Muhammad Khalifa, Marc Dymetman A Distributional Approach to Controlled Text Generation [Slides]
2021.03.26 Xinlei Chen Exploring Simple Siamese Representation Learning and Beyond [Slides]
2021.03.19 Jay J. Thiagarajan Improving Reliability and Generalization of Deep Models via Prediction Calibration [1] [2] [3] [Slides]
2021.03.12 Rohan Anil Scalable Second Order Optimization for Deep Learning [Slides]
2021.03.05 Rishabh Agarwal Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning [Slides]
2021.02.26 Robin Tibor Schirrmeister, Polina Kirichenko Understanding Semantic Anomaly Detection with Deep Convolutional Generative Networks [1] [2] [Slides]
2021.02.19 Krzysztof Choromanski Rethinking Attention with Performers - Towards New Transformers' Revolution [Slides]
2021.02.12 Yang Song Generative Modeling by Estimating Gradients of the Data Distribution [1] [2] [3] [Slides]
2021.02.05 Rosanne Liu Unconventional ways of training neural networks and what they teach us about model capacity [Slides]
2021.01.29 Johannes Brandstetter Hopfield Networks is All You Need [Slides]
2021.01.22 Andrey Malinin Uncertainty Estimation with Prior Networks [1] [2] [3] [4] [Slides]
2021.01.15 Jong Wook Kim Learning Transferable Visual Models From Natural Language Supervision [Slides]
2021.01.08 Liyuan Liu Understanding the Difficulty of Training Transformers [Slides]
2020.12.18 Julien Cornebise AI for Good and Ethics-Washing: a Self-Defense Primer [Slides]
2020.12.04 Rishabh Agarwal How I Learned To Stop Worrying And Love Offline RL [Slides]
2020.11.20 Sachit Menon PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [Slides]
2020.11.13 Luke Metz Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves [Slides]
2020.11.06 Karl Cobbe Phasic Policy Gradient [Slides]
2020.10.30 Angelos Katharopoulos Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention [Slides]
2020.10.23 Ryan Lowe Learning to Summarize with Human Feedback [Slides]
2020.10.16 Jason Lee Latent Variable Models and Iterative Refinement for Non-Autoregressive Neural Machine Translation [1] [2] [3] [Slides]
2020.10.09 Utku Evci Difficulty of Sparse Training and RigL [1] [2]
2020.10.02 Shrimai Prabhumoye Controllable Text Generation: Should machines reflect the way humans interact in society? [1] [2] [3] [Slides]
2020.09.25 Sidak Pal Singh Model Fusion via Optimal Transport [Slides]
2020.09.18 Katherine Ye Penrose: From Mathematical Notation to Beautiful Diagrams [Slides]
2020.09.11 Jesse Mu Compositional Explanations of Neurons [Slides]
2020.09.04 Yian Yin Unequal effects of the COVID-19 pandemic on scientists
2020.08.28 Arianna Ornaghi Gender Attitudes in the Judiciary: Evidence from U.S. Circuit Courts [Slides]
2020.08.21 Anna Goldie, Azalia Mirhoseini Chip Placement with Deep Reinforcement Learning [Paper]
2020.08.14 Zhongqi Miao Deep Learning and Realistic Datasets [1] [2] [Slides]
2020.07.31 Dan Hendrycks Out-of-distribution robustness in computer vision and NLP [1] [2] [Slides]
2020.07.24 Ben Mann Language Models are Few-Shot Learners [Slides]
More past events reside permanently on the original site.