Integrating Visual Information across Camera Movements with a Visual-Motor Calibration Map

Peter N. Prokopowicz, Paul R. Cooper

Facing competing demands for a wider field of view and higher spatial resolution, computer vision systems will evolve toward greater use of foveal sensors and frequent camera movements. Integrating visual information across movements then becomes a fundamental problem. We show that integration is possible using a biologically inspired representation we call the visual-motor calibration map. The map is a memory-based model of the relationship between camera movements and the corresponding pixel locations before and after any movement. It constitutes a self-calibration that can compensate for non-uniform sampling, lens distortion, mechanical misalignment, and arbitrary pixel reordering. Integration takes place entirely in a retinotopic frame, using a short-term, predictive visual memory.
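The abstract's central idea, a memory-based lookup relating each camera movement to pre- and post-movement pixel locations, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class name, the discrete-movement keys, and the per-pixel table layout are all assumptions. Because the table is learned purely from observed correspondences, it can absorb non-uniform sampling, lens distortion, and mechanical misalignment without an explicit camera model.

```python
import numpy as np

class CalibrationMap:
    """Sketch of a memory-based visual-motor calibration map.

    For each discrete camera movement, stores a per-pixel lookup mapping
    a retinal location before the movement to its location afterward.
    """

    def __init__(self, shape):
        self.shape = shape  # (rows, cols) of the retinotopic array
        self.maps = {}      # movement -> (rows, cols, 2) lookup table

    def record(self, movement, src, dst):
        """Store one observed correspondence: pixel `src` before the
        movement was seen at pixel `dst` after it."""
        table = self.maps.setdefault(
            movement, np.full(self.shape + (2,), -1, dtype=int))
        table[src] = dst

    def predict(self, movement, src):
        """Predict where pixel `src` will land after `movement`;
        returns None if this entry has not been calibrated yet."""
        table = self.maps.get(movement)
        if table is None or (table[src] < 0).any():
            return None
        return tuple(table[src])

    def remap(self, movement, image):
        """Build a short-term predictive visual memory: shift `image`
        into the post-movement retinotopic frame via the stored map."""
        out = np.zeros_like(image)
        table = self.maps.get(movement)
        if table is None:
            return out
        for r in range(self.shape[0]):
            for c in range(self.shape[1]):
                dr, dc = table[r, c]
                if 0 <= dr < self.shape[0] and 0 <= dc < self.shape[1]:
                    out[dr, dc] = image[r, c]
        return out
```

A usage example: after calibrating a rightward pan (which shifts image content one column left in this toy setup), `remap` predicts the post-movement retinal image from the pre-movement one, and `predict` answers where a single feature will reappear.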

Copyright AAAI. All rights reserved.