Keywords: Faculty of Science \ Computer Science
Abstract

Predicting the future is an important purpose of machine learning research. In online learning, predictions are given sequentially rather than all at once. People wish to make sensible decisions sequentially in many situations of everyday life, whether month-by-month, day-by-day, or minute-by-minute. In competitive prediction, the predictions are made by a set of experts and by a learner. The quality of the predictions is measured by a loss function. The goal of the learner is to make reliable predictions under any circumstances. The learner compares its loss with the loss of the best expert from the set and ensures that its performance is not much worse. In this thesis a general methodology is described for deriving algorithms with strong performance guarantees for many prediction problems. Specific attention is paid to the square loss function, which is widely used to assess the quality of predictions. Four types of expert sets are considered in this thesis: sets with a finite number of free experts (experts that are not required to follow any strategy), sets of experts following strategies from finite-dimensional spaces, sets of experts following strategies from infinite-dimensional Hilbert spaces, and sets of experts following strategies from infinite-dimensional Banach spaces. The power of the methodology is illustrated in the derivations of various prediction algorithms. Two core approaches are explored in this thesis: the Aggregating Algorithm and Defensive Forecasting. These approaches are close to each other in many interesting cases. However, Defensive Forecasting is more general and covers some problems that cannot be solved using the Aggregating Algorithm. The Aggregating Algorithm is more specific and is often more computationally efficient. The empirical performance and properties of the new algorithms are validated on artificial and real-world data sets. Specific areas where the algorithms can be applied are emphasized.
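As a hypothetical illustration (not part of the thesis itself), the competitive-prediction setup with a finite set of free experts under square loss can be sketched with the exponentially weighted average forecaster, a simple relative of the Aggregating Algorithm. For predictions and outcomes in [0, 1], square loss is exp-concave for learning rate eta <= 1/2, which guarantees that the learner's total loss exceeds the best expert's by at most ln(N)/eta. All names and the toy data below are illustrative assumptions.

```python
import math
import random

def ewa_square_loss(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster for square loss.

    expert_preds: list over trials of per-expert predictions in [0, 1].
    outcomes: list of true outcomes in [0, 1].
    For eta <= 1/2 the square loss is eta-exp-concave on [0, 1], giving
    learner_loss <= best_expert_loss + ln(N) / eta.
    """
    n_experts = len(expert_preds[0])
    log_weights = [0.0] * n_experts  # log-weights; start uniform
    learner_loss = 0.0
    expert_losses = [0.0] * n_experts
    for preds, y in zip(expert_preds, outcomes):
        # normalize the weights and predict with the weighted average
        m = max(log_weights)
        ws = [math.exp(lw - m) for lw in log_weights]
        total = sum(ws)
        gamma = sum(w * p for w, p in zip(ws, preds)) / total
        learner_loss += (gamma - y) ** 2
        # exponential update: down-weight each expert by its loss
        for i, p in enumerate(preds):
            loss = (p - y) ** 2
            expert_losses[i] += loss
            log_weights[i] -= eta * loss
    return learner_loss, expert_losses

# Toy run: three constant experts, noisy outcomes centred near 0.7.
random.seed(0)
T = 200
preds = [[0.2, 0.5, 0.7] for _ in range(T)]
ys = [min(1.0, max(0.0, 0.7 + random.uniform(-0.1, 0.1))) for _ in range(T)]
learner_loss, expert_losses = ewa_square_loss(preds, ys)
# The regret bound ln(3)/eta holds on every sequence, not just this one.
assert learner_loss <= min(expert_losses) + math.log(3) / 0.5
```

The guarantee is worst-case: it holds for every outcome sequence, which is the sense in which the learner "makes reliable predictions under any circumstances." The Aggregating Algorithm sharpens this scheme with a loss-specific substitution function, and Defensive Forecasting generalizes beyond it.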