About

I earned my PhD from NYU’s Center for Data Science, where I was advised by Yann LeCun. My research focuses on self-supervised learning methods for extracting meaningful data representations. In particular, I develop regularization techniques that prevent collapse during model pre-training, ensuring that the learned representations remain informative and useful.

Beyond my doctoral work, I am excited about advancing modern AI, including large language models and multi-modal generative systems. Drawing on my research experience, I aim to build and evaluate robust, scalable AI systems that generalize effectively across diverse tasks.

Selected Publications

Video Representation Learning with Joint-Embedding Predictive Architectures. Drozdov, K., Shwartz-Ziv, R. and LeCun, Y. Preprint, 2024. PDF

Representation Learning with Regularized Energy-Based Models. Drozdov, K. Thesis, 2024. HTML

Variance-Covariance Regularization Improves Representation Learning. Zhu, J., Evtimova, K., Chen, Y., Shwartz-Ziv, R. and LeCun, Y. Preprint, 2023. PDF

Sparse Coding with Multi-Layer Decoders using Variance Regularization. Evtimova, K. and LeCun, Y. Published in Transactions on Machine Learning Research, Aug 2022. PDF CODE

Emergent Communication in a Multi-Modal, Multi-Step Referential Game. Evtimova, K., Drozdov, A., Kiela, D. and Cho, K. Accepted as a poster at ICLR 2018. PDF CODE

News

  • December 2024: Check out my new preprint on video representation learning with joint-embedding predictive architectures!
  • October 2024: My research on emergent communication with adaptive compute at inference time was featured in the CDS Blog: “From Academia to Industry: How a 2018 Paper Foreshadowed OpenAI’s Latest Innovation”.
  • September 2024: I’ve been invited to serve as a reviewer for TMLR.
  • July 2024: Excited to share that I defended my thesis titled “Representation Learning with Regularized Energy-Based Models”!
  • February 2024: Serving as a reviewer at ICML 2024.
  • November 2023: Serving as a reviewer at AISTATS 2024.
  • October 2023: Serving as a reviewer at ICLR 2024.
  • March 2023: Serving as a reviewer at ICML 2023.
  • January 2023: Gave an invited talk on self-supervised learning at Prof. Leif Weatherby’s course “Theory of the Digital”.
  • November 2022: I’ll be at NeurIPS 2022. Would love to chat about self-supervised learning, regularization, latent variable models, etc. If you’re also around, do reach out!
  • August 2022: “Sparse Coding with Multi-Layer Decoders using Variance Regularization” is now published at TMLR!
  • April 2022: Selected as one of the Highlighted Reviewers at ICLR 2022.
  • April 2022: Serving as a reviewer at ICML 2022.
  • October 2021: Serving as a reviewer at ICLR 2022.
  • September 2021: Excited to be an organizer of the 2022 NYU AI School.
  • July 2021: Serving as a reviewer at NeurIPS 2021.
  • March 2021: Serving as a reviewer at ICML 2021.
  • February 2021: Serving as a reviewer at the Energy-based Models workshop at ICLR 2021.
  • April 2020: Happy to share that I passed my Depth Qualification Exam on the topic of “Energy-Based Learning & Regularized Latent Variable Models” with committee members Joan Bruna, Kyunghyun Cho, and Yann LeCun.
  • January 2020: Gave a talk at the CILVR seminar titled “Self-supervised Learning & Sparse Overcomplete Representations of Visual Data”.
  • January 2020: Looking forward to being a teaching assistant for Introduction to Machine Learning at Courant over the spring.
  • February 2019: Excited to share that I’ll be interning at FAIR this summer.
  • January 2019: Happy to be a section leader for NYU’s Deep Learning class this spring.
Contact

✉️ kve216 at nyu.edu