I am looking for 1-2 post-docs and 1-2 PhD students to work on practical and theoretical aspects of online learning, stochastic optimization, and training of LLMs. The ideal candidate has an exceptional mathematical background and is proficient in coding. If you are interested, send me an email with your CV. Please also include your transcript if you are applying for a PhD position.
Due to the volume of emails and my limited time, I might not reply to everyone: please do not take it personally; it most likely means you were not a good match for my lab.
My current research interest is parameter-free machine learning. In particular, I am interested in online learning, batch/stochastic optimization, and statistical learning theory.
NEW! A paper on better-than-KL PAC-Bayes bounds by Ilja Kuzborskij, Kwang-Sung Jun, Yulian Wu, Kyoungseok Jang, and me has been accepted at COLT 2024.
NEW! A paper on training without depth limits using batch normalization without gradient explosion, by Alexandru Meterez, Amir Joudaki, me, Alexander Immer, Gunnar Rätsch, and Hadi Daneshmand, has been accepted at ICLR 2024.
NEW! A paper on tight concentrations and confidence sequences from the regret of universal portfolio by me and Kwang-Sung Jun has been accepted at IEEE Transactions on Information Theory.