Abstract
Online learning is discussed from the viewpoint of Bayesian statistical inference. By replacing the true posterior distribution with a simpler parametric distribution, one can define an online algorithm by a repetition of two steps: an update of the approximate posterior when a new example arrives, and an optimal projection into the parametric family. Choosing this family to be Gaussian, we show that the algorithm achieves asymptotic efficiency. An application to learning in single-layer neural networks is given.
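The two-step scheme described in the abstract can be sketched numerically. The toy code below is an illustration under simplifying assumptions, not the paper's derivation: a single scalar weight, a logistic likelihood standing in for a single-layer network, and the "optimal projection" implemented as moment matching of the one-step posterior onto a Gaussian via grid quadrature. The function name `adf_update` and all parameters are hypothetical choices for this sketch.

```python
import numpy as np

def adf_update(mu, var, likelihood, width=8.0, n=2001):
    """One Bayesian online step: multiply the Gaussian approximate posterior
    by the likelihood of the new example, then project back onto the Gaussian
    family by matching the first two moments (computed on a grid)."""
    sd = np.sqrt(var)
    theta = np.linspace(mu - width * sd, mu + width * sd, n)
    dx = theta[1] - theta[0]
    prior = np.exp(-0.5 * (theta - mu) ** 2 / var)   # unnormalised Gaussian
    post = prior * likelihood(theta)                 # step 1: Bayes update
    post /= post.sum() * dx                          # normalise
    new_mu = (theta * post).sum() * dx               # step 2: moment matching
    new_var = ((theta - new_mu) ** 2 * post).sum() * dx
    return new_mu, new_var

# Toy experiment: learn a scalar weight w_true from binary examples
# generated by a logistic "single-layer" model (assumed setup).
rng = np.random.default_rng(0)
w_true = 1.5
mu, var = 0.0, 4.0                                   # Gaussian prior
for _ in range(200):
    x = rng.normal()
    y = 1 if rng.random() < 1.0 / (1.0 + np.exp(-w_true * x)) else -1
    mu, var = adf_update(mu, var,
                         lambda w: 1.0 / (1.0 + np.exp(-y * w * x)))
print(mu, var)  # posterior mean near w_true, variance shrunk from the prior
```

As more examples arrive, the approximate posterior mean drifts toward the true weight and the variance contracts, consistent with the asymptotic-efficiency claim in the abstract for this Gaussian choice of family.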
Opper, Manfred and Winther, Ole (1999). A Bayesian approach to on-line learning. In: Saad, David (ed.), On-line Learning in Neural Networks. Publications of the Newton Institute. Cambridge: Cambridge University Press.