Francesco Orabona


Research: Parameter-Free Machine Learning


I focus on a simple but often ignored question: how can we make machine learning algorithms truly automatic? How do we remove the constant need for a human (a PhD student?) to hand-tune all the knobs of an algorithm until it finally works?

Most of the time you find theoretical papers that discuss "optimal tuning", or applied papers presenting models with 5-6 parameters, and in both cases no procedure for actually tuning those parameters is given.

I try to go beyond the usual "grad student descent" approach of practitioners and the "doubling trick" of theoreticians (restarting the algorithm with a doubled guess of the unknown quantity). I design theoretical algorithms that are natively parameter-free and are also useful in the real world.

Below is a sample of my contributions to this line of research.



Parameter-Free Linear Classification and Regression
  • F. Orabona and D. Pal. Coin Betting and Parameter-Free Online Learning. In Advances in Neural Information Processing Systems (NIPS), 2016 [PDF] (a minimal code sketch of the coin-betting idea follows this list)
  • F. Orabona and D. Pal. Scale-Free Algorithms for Online Linear Optimization. In Proc. of the Conference on Algorithmic Learning Theory (ALT), 2015 [PDF]
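
As a taste of the coin-betting approach, here is a minimal one-dimensional sketch (my own illustration, not code released with the paper): the prediction at each round is a bet, a fraction of the current wealth, where the betting fraction is the Krichevsky-Trofimov estimator. It assumes the gradients lie in [-1, 1]; the initial wealth is not a tuned parameter, since the regret depends on it only logarithmically.

    # Minimal sketch of parameter-free online linear optimization via
    # coin betting (a KT bettor), in one dimension. Illustrative code,
    # not the official implementation; the function name is hypothetical.
    def kt_coin_betting(gradients, initial_wealth=1.0):
        wealth = initial_wealth  # epsilon: regret grows only with log(1/epsilon)
        grad_sum = 0.0           # running sum of past "coin outcomes" -g_s
        iterates = []
        for t, g in enumerate(gradients, start=1):
            beta = grad_sum / t  # KT betting fraction: no learning rate anywhere
            w = beta * wealth    # bet a fraction of the current wealth
            iterates.append(w)
            wealth -= g * w      # win or lose the bet on this round
            grad_sum -= g
        return iterates

For instance, feeding it the subgradients of f_t(w) = |w - 10| drives the iterates toward 10 without ever choosing a step size.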


Parameter-Free Non-Parametric Classification and Regression
  • F. Orabona. Simultaneous Model Selection and Optimization through Parameter-free Stochastic Learning. In Advances in Neural Information Processing Systems (NIPS), 2014 [PDF (version with proofs)]
  • S. Kpotufe and F. Orabona. Regression-tree Tuning in a Streaming Setting. In Advances in Neural Information Processing Systems (NIPS), 2013 [PDF (version with proofs)] [CODE (linear version)]


Parameter-Free Transfer Learning
  • T. Tommasi, F. Orabona, and B. Caputo. Learning Categories from Few Examples with Multi Model Knowledge Transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014 [PDF]
  • I. Kuzborskij and F. Orabona. Stability and Hypothesis Transfer Learning. In Proc. of the International Conference on Machine Learning (ICML), Atlanta, GA, June 2013 [PDF] [Errata] (a minimal code sketch of a canonical instance follows below)
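
The stability analysis of hypothesis transfer learning covers, as a canonical instance, regularized least squares biased toward a source hypothesis: the regularizer pulls the target model toward the source model instead of toward zero. A minimal sketch of that instance, under my own naming (not code from the paper):

    import numpy as np

    # Minimal sketch of hypothesis transfer learning via biased regularization:
    #   min_w  ||X w - y||^2 + lam * ||w - w_src||^2
    # Closed form: w = (X^T X + lam I)^{-1} (X^T y + lam w_src).
    # Illustrative code; names are hypothetical, not from the paper.
    def biased_ridge(X, y, w_src, lam=1.0):
        d = X.shape[1]
        A = X.T @ X + lam * np.eye(d)
        b = X.T @ y + lam * w_src
        return np.linalg.solve(A, b)

With a large lam the solution stays close to the source model w_src (pure transfer); with a small lam it reverts to ordinary least squares on the target data alone.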