Reasons for Beliefs in Understanding: Applications of Non-Monotonic Dependencies to Story Processing

Paul O'Rorke

Many of the inferences and decisions which contribute to understanding involve fallible assumptions. When these assumptions are undermined, computational models of comprehension should respond rationally. This paper crossbreeds AI research on problem solving and understanding to produce a hybrid model ("reasoned understanding"). In particular, the paper shows how non-monotonic dependencies [Doyle79] enable a schema-based story processor to adjust to new information requiring the retraction of assumptions.
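The mechanism the abstract refers to can be illustrated with a minimal sketch, not O'Rorke's actual system, of Doyle-style non-monotonic dependencies: a belief is held while the beliefs on its in-list are held and the beliefs on its out-list are not. The story example (a restaurant schema, the "paid" default, and the "left_without_paying" belief) is a hypothetical illustration, not taken from the paper.

```python
class TMS:
    """Naive justification-based truth maintenance, for illustration.

    A full TMS does careful dependency-directed labeling; here we simply
    recompute the set of held beliefs from scratch on each query.
    """

    def __init__(self):
        self.premises = set()      # beliefs asserted outright
        self.justifications = []   # (conclusion, in_list, out_list)

    def add_premise(self, belief):
        self.premises.add(belief)

    def retract_premise(self, belief):
        self.premises.discard(belief)

    def justify(self, conclusion, in_list=(), out_list=()):
        self.justifications.append((conclusion, tuple(in_list), tuple(out_list)))

    def beliefs(self):
        # A belief is IN if it is a premise, or if some justification's
        # in-list is entirely IN and its out-list entirely OUT.
        # Iterate to a fixed point (adequate for simple acyclic examples).
        held = set(self.premises)
        changed = True
        while changed:
            changed = False
            for concl, ins, outs in self.justifications:
                if concl not in held and all(i in held for i in ins) \
                        and all(o not in held for o in outs):
                    held.add(concl)
                    changed = True
        return held


tms = TMS()
tms.add_premise("entered_restaurant")
# Default assumption from a restaurant schema: the customer paid,
# unless we are told otherwise (the out-list defeats the default).
tms.justify("paid", in_list=["entered_restaurant"],
            out_list=["left_without_paying"])
print("paid" in tms.beliefs())   # the fallible assumption holds

tms.add_premise("left_without_paying")
print("paid" in tms.beliefs())   # new information retracts it
```

Because the out-list makes the dependency non-monotonic, adding the new premise removes a conclusion that was previously believed, which is exactly the kind of retraction-on-new-information behavior the abstract describes.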