ML Collective is an independent, nonprofit organization that conducts fundamental machine learning research and provides opportunities for collaboration and mentorship.

We provide a "research home" to unaffiliated and independent researchers, help underrepresented, and those on non-traditional paths get into AI research. We rely on established researchers who care about the societal impact of science, the wellbeing and fairness of the community, and are willing to give back, broaden and diversify their collaboration circle.

At ML Collective, we believe research opportunities should be accessible and free, and that open collaboration is the key to further democratizing AI research.

Find us on Twitter.

News

  1. [Dec 2020] Sam Greydanus scales down deep learning with the release of MNIST-1D (arXiv, blog, code, Colab).
  2. [Nov 2020] We are hosting the NeurIPS 2020 Social on Open Collaboration! Stop by on Tuesday, Dec 8!
  3. [Nov 2020] Ever wondered how gradient ascent can help optimize the shape of a wing? Read "The Story of Airplane Wings" (blog, arXiv) by Sam Greydanus.
  4. [Sep 2020] Congratulations to Piero Molino and team on publishing "Controllable Text Generation with Focused Variation", to appear at Findings of ACL: EMNLP 2020.
  5. [Sep 2020] Congrats to Sara Hooker, Chirag Agarwal, and the rest of the organizing team on launching the Trustworthy ML Initiative!
  6. [Aug 2020] Congratulations to Sam Greydanus on publishing Self-classifying MNIST Digits on Distill!

Projects

Supermasks in Superposition

Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, Ali Farhadi

Published at NeurIPS 2020

Estimating Q(s,s’) with Deep Deterministic Dynamics Gradients

Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

Published at ICML 2020

Plug and Play Language Models: a Simple Approach to Controlled Text Generation

Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, Rosanne Liu

Published at ICLR 2020

LCA: Loss Change Allocation for Neural Network Training

Janice Lan, Rosanne Liu, Hattie Zhou, Jason Yosinski

Published at NeurIPS 2019

Hamiltonian Neural Networks

Sam Greydanus, Misko Dzamba, and Jason Yosinski

Published at NeurIPS 2019

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

Hattie Zhou, Janice Lan, Rosanne Liu, Jason Yosinski

Published at NeurIPS 2019

Metropolis-Hastings Generative Adversarial Networks

Ryan Turner, Jane Hung, Eric Frank, Yunus Saatci, and Jason Yosinski

Published at ICML 2019

Faster Neural Networks Straight from JPEG

Lionel Gueguen, Alex Sergeev, Ben Kadlec, Rosanne Liu, Jason Yosinski

Published at NeurIPS 2018

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, Jason Yosinski

Published at NeurIPS 2018

Measuring the Intrinsic Dimension of Objective Landscapes

Chunyuan Li, Heerad Farkhoor, Rosanne Liu, Jason Yosinski

Published at ICLR 2018

Members

Rosanne Liu
Andrea Madotto
Zach Nussbaum
Daniel D'souza
Yaroslav Bulatov
Marcos Pereira
Chirag Agarwal
Chloe Hsu
Jonathan Frankle
Sebastian Ruder
Stephanie Sher
Sara Hooker
Ankit Jain
Jane Hung
Thomas Miconi
Eric Frank
Yariv Sadan
Sam Greydanus
Rui Wang
Janice Lan
Hattie Zhou
Xinyu Hu
Piero Molino
Niel Teng Hu
Mitchell Wortsman
Jason Yosinski