About
I earned my PhD from NYU’s Center for Data Science, where I was advised by Yann LeCun. My research focuses on self-supervised learning methods for extracting meaningful data representations. In particular, I develop regularization techniques that prevent collapse during model pre-training, ensuring that the learned representations remain informative and useful.
Beyond my doctoral work, I am excited about advancing modern AI, including large language models and multi-modal generative systems. Drawing on my research experience, I aim to build and evaluate robust, scalable AI systems that generalize effectively across diverse tasks.
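To make the anti-collapse idea concrete, here is a minimal PyTorch sketch of a variance-covariance penalty in the spirit of the variance-covariance regularization work listed below. The function name, epsilon, and the unit-std hinge target are illustrative assumptions, not the exact formulation from those papers.

```python
# Minimal sketch of a variance-covariance penalty against representation
# collapse. Function name, eps, and the std hinge target of 1.0 are
# illustrative choices, not the exact formulation from the papers below.
import torch

def variance_covariance_penalty(z: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """z: batch of embeddings with shape (batch_size, dim)."""
    z = z - z.mean(dim=0)                    # center each dimension
    std = torch.sqrt(z.var(dim=0) + eps)     # per-dimension standard deviation
    var_loss = torch.relu(1.0 - std).mean()  # hinge: keep each std >= 1

    n, d = z.shape
    cov = (z.T @ z) / (n - 1)                # (dim, dim) covariance matrix
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d     # penalize cross-dimension correlation

    return var_loss + cov_loss               # added to the pre-training objective

# Example: random embeddings of 256 samples with 128 dimensions.
loss = variance_covariance_penalty(torch.randn(256, 128))
```

In practice, such a penalty is added to a pre-training loss so that no embedding dimension collapses to zero variance and the dimensions stay decorrelated.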
Selected Publications
Video Representation Learning with Joint-Embedding Predictive Architectures. Drozdov, K., Shwartz-Ziv, R. and LeCun, Y. Preprint, 2024. PDF
Representation Learning with Regularized Energy-Based Models. Drozdov, K. PhD thesis, 2024. HTML
Variance-Covariance Regularization Improves Representation Learning. Zhu, J., Evtimova, K., Chen, Y., Shwartz-Ziv, R. and LeCun, Y. Preprint, 2023. PDF
Sparse Coding with Multi-Layer Decoders using Variance Regularization. Evtimova, K. and LeCun, Y. Transactions on Machine Learning Research, 2022. PDF CODE
Emergent Communication in a Multi-Modal, Multi-Step Referential Game. Evtimova, K., Drozdov, A., Kiela, D. and Cho, K. ICLR 2018 (poster). PDF CODE
Contact
✉️ kve216 at nyu.edu