
Katie Everett

I am a Research Engineer at Google DeepMind where I work on understanding and improving training dynamics in large-scale neural networks. My work focuses on scaling laws, parameterization, and high-dimensional optimization. I am also interested in reasoning and the role of discrete compositional structures in representations.

I am also a PhD candidate in Electrical Engineering and Computer Science at MIT, advised by Leslie Kaelbling. I completed my M.Eng. in Electrical Engineering and Computer Science and my B.S. with a double major in Computer Science and Mathematics, both at MIT.

Publications

  1. Logarithmic-time Schedules for Scaling Language Models with Momentum
    Damien Ferbach, Courtney Paquette, Gauthier Gidel, Katie Everett, and Elliot Paquette
    Preprint (2026)
  2. Dimension-adapted Momentum Outscales SGD
    Damien Ferbach, Katie Everett, Gauthier Gidel, Elliot Paquette, and Courtney Paquette
Neural Information Processing Systems (NeurIPS) 2025 (Spotlight)
  3. Scaling Exponents Across Parameterizations and Optimizers
    Katie Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, and Jeffrey Pennington
    International Conference on Machine Learning (ICML) 2024
  4. Small-scale proxies for large-scale Transformer training instabilities
    Mitchell Wortsman, Peter J. Liu, Lechao Xiao, Katie Everett, Alex Alemi, Ben Adlam, John D. Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, and Simon Kornblith
International Conference on Learning Representations (ICLR) 2024 (Oral Presentation)
  5. Nonparametric partial disentanglement via mechanism sparsity: Sparse actions, interventions and sparse temporal dependencies
    Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, and Simon Lacoste-Julien
    Preprint (2024)
  6. GFlowNet-EM for learning compositional latent variable models
    Edward J. Hu, Nikolay Malkin, Moksh Jain, Katie Everett, Alexandros Graikos, and Yoshua Bengio
    International Conference on Machine Learning (ICML) 2023
  7. GFlowNets and variational inference
Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, and Yoshua Bengio
    International Conference on Learning Representations (ICLR) 2023
  8. Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA
    Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, and Simon Lacoste-Julien
    Conference on Causal Learning and Reasoning (CLeaR) 2022
  9. Google COVID-19 search trends symptoms dataset
    Shailesh Bavadekar, Andrew Dai, John Davis, Damien Desfontaines, Ilya Eckstein, Katie Everett, Alex Fabrikant, Gerardo Flores, Evgeniy Gabrilovich, Krishna Gadepalli, Shane Glass, Rayman Huang, Chaitanya Kamath, Dennis Kraft, Akim Kumok, Hinali Marfatia, Yael Mayer, Benjamin Miller, Adam Pearce, Irippuge Milinda Perera, Venky Ramachandran, Karthik Raman, Thomas Roessler, Izhak Shafran, Tomer Shekel, Charlotte Stanton, Jacob Stimes, Mimi Sun, Gregory Wellenius, and Masrour Zoghi
    Preprint (2020)
  10. Cycles in Causal Learning
    Katie Everett and Ian Fischer
    ICLR Workshop on Causal Learning for Decision Making 2020