Boosting Theory Towards Practice: Recent Developments in Decision Tree Induction and the Weak Learning Framework

Michael Kearns

One of the original goals of computational learning theory was that of formulating models that permit meaningful comparisons between the different machine learning heuristics that are used in practice [Kearns et al., 1987]. Despite the other successes of computational learning theory, this goal has proven elusive. Empirically successful machine learning algorithms such as C4.5 and the backpropagation algorithm for neural networks have not met the criteria of the well-known Probably Approximately Correct (PAC) model [Valiant, 1984] and its variants, and thus such models are of little use in drawing distinctions among the heuristics used in applications. Conversely, the algorithms suggested by computational learning theory are usually too limited in various ways to find wide application.
