Human Comprehensible Machine Learning
Papers from the AAAI Workshop
Dan Oblinger, Chair
Technical Report WS-05-04
44 pp., $25.00
ISBN 978-1-57735-240-2
Humans need to trust that intelligent systems are behaving correctly, and one way to achieve such trust is to enable people to understand the inputs, outputs, and algorithms used, as well as any new knowledge acquired through learning. As machine learning is adopted in more critical operations, it is increasingly applied in domains where the learning system's inputs and outputs must be understood, or even modified, by human operators.
For instance, e-mail classification systems may need to gain the user's trust by explaining their predictions in a language the user can understand. Intelligent office assistants learn from a user's preferences and behavior, but for such an assistant to be useful, the user must trust that the agent will make the same decisions the human would under the same conditions. Machine learning has also been widely used to support credit approval decisions, yet banks are increasingly responsible for explaining the reasons behind a denial of credit. Autonomic systems are beginning to employ machine learning to support common administrative policies, yet system administrators are reluctant to trust automated technology they do not understand.
This workshop explored issues of human comprehensibility as they relate to machine learning.