Learning and Planning in Markov Processes — Advances and Challenges
Papers from the AAAI Workshop
Daniela Pucci de Farias, Shie Mannor, Doina Precup, and Georgios Theocharous, Program Cochairs
Technical Report WS-04-08
118 pp., $30.00
ISBN 978-1-57735-209-9
A popular approach to artificial intelligence involves modeling an agent’s interaction with the environment through actions, observations, and rewards. Intelligent agents choose actions after every observation, aiming to maximize long-term reward. Markov decision processes (MDPs) are a widely adopted paradigm for modeling this interaction. The workshop brought together a wide spectrum of researchers, from those involved in theoretical MDP research to implementers concerned with real-world problems. It reported on theoretical advances in the field, identified the challenges faced by practitioners, and aimed to direct the community toward solving problems that are relevant to practice yet remain computationally tractable.