Integration of Low and Mid Level Spatial Cognition Processes for Robot Perception

CIS Colloquium, Oct 17, 2007, 03:30PM – 04:30PM, TECH Center 111


Prof. Rolf Lakaemper, Temple University

To increase the mobility of autonomous robots to a level applicable in real-world environments, novel methods in robot perception are required to master spatial cognition tasks such as object recognition and mapping. Research on human spatial cognition suggests that 'visual mental imagery' helps humans understand their environment. Visual mental images can be described as virtual objects, or expected images, resembling certain aspects of the experience of actually perceiving sensor data. This project addresses the technical implementation of principles of visual mental imagery at the level of Mid Level Spatial Cognition (MLSC), focusing on tasks related to visual robot perception. In contrast to Low Level Spatial Cognition (LLSC), which is mainly based on basic local features of spatial data, MLSC relates to mid-level concepts, for example the concept of shape. Just as humans add visual mental images to low-level sensor information in order to infer higher-level information, we propose to integrate virtual MLSC objects with visual LLSC processing of sensor data in robots. To achieve this goal, a feedback system is created: with respect to shape, the MLSC modules analyze the LLSC-preprocessed data to generate virtual objects. These virtual objects are offered back to the LLSC modules as hypothetical sensor data. The LLSC modules in turn evaluate the virtual data and adjust their interpretation of the sensor data accordingly. The system is applied to an important field in autonomous robotics research: multi-robot mapping with search-and-rescue robots in disaster environments.
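The feedback loop described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the speaker's actual system: all class and function names (`Segment`, `llsc_extract`, `mlsc_virtual_objects`, `llsc_evaluate`), the distance threshold, and the confidence update are assumptions chosen to make the LLSC-to-MLSC-and-back data flow concrete.

```python
# Hypothetical sketch of the LLSC/MLSC feedback loop for shape-based
# robot perception. All names and parameters are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Segment:
    """A group of sensor points with a confidence score."""
    points: List[Point]
    confidence: float

def llsc_extract(scan: List[Point]) -> List[Segment]:
    """Low Level Spatial Cognition: group consecutive scan points into
    segments using a simple distance threshold (a stand-in for real
    local-feature extraction from range data)."""
    segments, current = [], [scan[0]]
    for p, q in zip(scan, scan[1:]):
        if abs(p[0] - q[0]) + abs(p[1] - q[1]) <= 1.5:
            current.append(q)
        else:
            segments.append(Segment(current, 0.5))
            current = [q]
    segments.append(Segment(current, 0.5))
    return segments

def mlsc_virtual_objects(segments: List[Segment]) -> List[Segment]:
    """Mid Level Spatial Cognition: propose 'virtual objects' (expected
    shapes). Here, any segment of three or more points is hypothesized
    to belong to an extended structure such as a wall."""
    return [Segment(s.points, 0.9) for s in segments if len(s.points) >= 3]

def llsc_evaluate(segments: List[Segment],
                  virtual: List[Segment]) -> List[Segment]:
    """Feedback step: the LLSC side treats virtual objects as hypothetical
    sensor data and raises the confidence of segments that agree with them."""
    virtual_pts = {p for v in virtual for p in v.points}
    refined = []
    for s in segments:
        overlap = sum(1 for p in s.points if p in virtual_pts) / len(s.points)
        refined.append(Segment(s.points, min(1.0, s.confidence + 0.4 * overlap)))
    return refined

# One pass of the loop on a toy scan: a wall-like run of points plus outliers.
scan = [(0, 0), (1, 0), (2, 0), (3, 0), (10, 0), (20, 5)]
segments = llsc_extract(scan)
virtual = mlsc_virtual_objects(segments)
refined = llsc_evaluate(segments, virtual)
```

In this toy run, the four collinear points form one segment that MLSC promotes to a virtual object, so its confidence rises on the feedback pass while the two isolated points keep their initial score; a real system would iterate this loop as new scans arrive.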