## Research: Parameter-Free Machine Learning

I focus on a simple but often ignored question: how can we make machine learning algorithms truly automatic? How can we remove the constant need for a human (a PhD student?) to hand-tune all the knobs of an algorithm until it finally works?

Most of the time you find theoretical papers that discuss "optimal tuning", or application papers presenting models with 5-6 parameters, and in neither case is a procedure for actually tuning those parameters given.

I try to go beyond the usual "grad student descent" approach of the practitioners and the "doubling trick" of the theoreticians. I design theoretical algorithms that are natively parameter-free and also useful in the real world.
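For readers unfamiliar with it, the "doubling trick" mentioned above can be sketched in a few lines: the algorithm is restarted in phases of doubling length, with the step size re-tuned for each phase's guessed horizon. This is a hypothetical 1-D illustration, not code from any of the papers; `grad_fn` and the bounds `D` (domain) and `G` (gradients) are assumptions of the sketch.

```python
import math

def sgd_with_doubling_trick(grad_fn, total_T, D=1.0, G=1.0):
    """Projected SGD restarted in doubling phases 1, 2, 4, ...

    Illustrative only: D bounds the domain [-D, D], G bounds |grad|;
    each phase re-tunes the step size for its own guessed horizon.
    """
    w, t, phase = 0.0, 0, 1
    while t < total_T:
        eta = D / (G * math.sqrt(phase))   # step size tuned for this phase
        for _ in range(min(phase, total_T - t)):
            w -= eta * grad_fn(w)
            w = max(-D, min(D, w))         # project back onto [-D, D]
            t += 1
        phase *= 2
    return w
```

Note that the trick does not remove tuning: it still needs the bounds `D` and `G` as inputs, which is exactly the kind of hidden knob that parameter-free algorithms aim to eliminate.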

Below is a sample of my contributions to this line of research.

### Parameter-Free Backpropagation

- F. Orabona and T. Tommasi. Backprop without Learning Rates Through Coin Betting, 2017 [PDF] [CODE]
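A minimal 1-D sketch of the coin-betting idea (here the Krichevsky-Trofimov bettor) may help convey how an optimizer can run without any learning rate; this is an illustrative toy, not the algorithm from the paper above, and it assumes gradients bounded in [-1, 1].

```python
def kt_coin_betting(grad_fn, T, eps=1.0):
    """1-D parameter-free learner via Krichevsky-Trofimov coin betting.

    Illustrative sketch assuming |grad| <= 1: the negative gradient is
    treated as a coin outcome, and the iterate is a signed fraction of
    the current "wealth", so no learning rate is ever specified.
    """
    wealth, coin_sum, w = eps, 0.0, 0.0
    for t in range(1, T + 1):
        w = (coin_sum / t) * wealth   # KT bet: signed fraction of wealth
        g = grad_fn(w)
        c = -g                        # coin outcome in [-1, 1]
        wealth += c * w               # win or lose the amount bet
        coin_sum += c
    return w
```

The only free constant is the initial wealth `eps`, whose effect on the regret is only logarithmic, which is what makes the method effectively parameter-free.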

### Parameter-Free Linear Classification and Regression

- F. Orabona and D. Pal. Coin Betting and Parameter-Free Online Learning. NIPS 2016 [PDF]
- F. Orabona and D. Pal. Scale-Free Algorithms for Online Linear Optimization. ALT 2015 [PDF]

### Parameter-Free Non-Parametric Classification and Regression

### Parameter-Free Transfer Learning

- T. Tommasi, F. Orabona, and B. Caputo. Learning Categories from Few Examples with Multi Model Knowledge Transfer. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014 [PDF]
- I. Kuzborskij and F. Orabona. Stability and Hypothesis Transfer Learning. ICML 2013 [PDF] [Errata]