Learning from an Approximate Theory and Noisy Examples

Somkiat Tangkitvanich, Masamichi Shimura

This paper presents an approach to a new learning problem: learning from an approximate theory and a set of noisy examples. This problem requires a new learning approach since it cannot be satisfactorily solved by inductive or analytic learning algorithms, or by their existing combinations. Our approach can be viewed as an extension of the minimum description length (MDL) principle, and is unique in that it is based on encoding the refinement required to transform the given theory into a better theory, rather than on encoding the resultant theory as in traditional MDL. Experimental results show that, based on our approach, the theory learned from an approximate theory and a set of noisy examples is more accurate than either the approximate theory itself or a theory learned from the examples alone. This suggests that our approach can combine useful information from both the theory and the training set even though both are only partially correct.
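To make the contrast concrete, the following is a minimal sketch of the two scoring schemes the abstract contrasts. All specifics here are illustrative assumptions, not the paper's actual encoding: the rule strings, the uniform 64-symbol alphabet, and the use of a symmetric set difference as the "refinement" are hypothetical stand-ins. The point is only that when a candidate theory shares most of its rules with the given approximate theory, encoding the refinement (the delta) is cheaper than encoding the whole resultant theory.

```python
import math

# Hypothetical rule-based theories, each represented as a set of rule strings.
# These example rules are illustrative, not taken from the paper.
initial_theory = {"bird(X) -> flies(X)", "penguin(X) -> bird(X)"}
refined_theory = {"bird(X) -> flies(X)", "penguin(X) -> bird(X)",
                  "penguin(X) -> not flies(X)"}

# Training examples the candidate theory still misclassifies (exceptions).
exceptions = ["flies(tweety)"]

def code_len(strings, alphabet=64):
    """Bits to encode the given strings, assuming a uniform 64-symbol alphabet."""
    return sum(len(s) for s in strings) * math.log2(alphabet)

def traditional_mdl(theory, exceptions):
    """Traditional MDL: encode the whole resultant theory plus its exceptions."""
    return code_len(theory) + code_len(exceptions)

def refinement_mdl(base, refined, exceptions):
    """Refinement-based score: encode only the edit from the given approximate
    theory to the candidate (rules added or removed), plus the exceptions."""
    delta = base.symmetric_difference(refined)
    return code_len(delta) + code_len(exceptions)

trad = traditional_mdl(refined_theory, exceptions)
refi = refinement_mdl(initial_theory, refined_theory, exceptions)
print(f"traditional MDL: {trad:.1f} bits, refinement-based: {refi:.1f} bits")
```

Because the refined theory differs from the initial one by a single rule, the refinement-based score charges only for that one rule, so a candidate close to the given theory is preferred even when encoding it from scratch would be expensive.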
