Inter-Layer Learning Towards Emergent Cooperative Behavior

As applications for artificially intelligent agents grow in complexity, we can no longer rely on clever heuristics and hand-tuned behaviors to program them. Even the interactions between individual components cannot be reduced to simple rules, as the complexities of realistic dynamic environments become unwieldy to characterize manually. To cope with these challenges, we propose an architecture for inter-layer learning consisting of three tiers: basic skills, individual strategy, and team strategy. Each tier can be constructed using machine learning techniques and incorporates the skills developed in the layer below. Using RoboCup soccer as a testbed, we demonstrate the potential of this architecture for developing effective, cooperative multi-agent systems. First, individual basic skills are developed and refined in isolation using neural networks and reinforcement learning; then the interaction between these skills at higher layers is itself learned. Finally, reinforcement learning is applied to emergent cooperative behavior among teammates. The inter-layer learning architecture provides an explicit learning model for deciding individual and cooperative tactics in a dynamic environment, and it proved promising in real-time competition.
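The three-tier decision flow described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the class names, the table-based policies, and the `agent_step` cycle are all assumptions standing in for the learned components (neural networks and reinforcement-learned policies) at each layer.

```python
class SkillLayer:
    """Lowest tier: basic skills, each assumed to be a policy
    trained in isolation (the paper uses neural networks and RL)."""
    def __init__(self, skills):
        self.skills = skills  # skill name -> callable(state) -> action

    def execute(self, skill_name, state):
        return self.skills[skill_name](state)


class IndividualStrategyLayer:
    """Middle tier: chooses which basic skill one agent should invoke.
    The lookup table here is a hand-filled stand-in for a trained policy."""
    def __init__(self, policy):
        self.policy = policy  # situation key -> skill name

    def select_skill(self, situation):
        return self.policy[situation]


class TeamStrategyLayer:
    """Top tier: assigns a situation/role to each teammate, from which
    cooperative behavior can emerge; again a placeholder for learning."""
    def __init__(self, role_policy):
        self.role_policy = role_policy  # agent id -> situation key

    def assign(self, agent_id):
        return self.role_policy[agent_id]


def agent_step(team, individual, skills, agent_id, state):
    """One decision cycle: team layer -> individual layer -> skill layer."""
    situation = team.assign(agent_id)
    skill = individual.select_skill(situation)
    return skills.execute(skill, state)
```

For example, a team layer assigning agent 1 the "attacker" role would route its decision through the individual layer's skill choice for attackers down to the corresponding basic skill.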

This page is copyrighted by AAAI. All rights reserved.