Preliminary II Examination, Apr 30, 2008, 02:30PM – 04:30PM, Wachman 447
Techniques for Extracting Contours and Merging Maps
Nagesh Adluru
Committee:
Longin Jan Latecki (Chair)
Rolf Lakamper
Slobodan Vucetic
Understanding machine vision can improve our understanding of artificial intelligence, since vision is one of the basic intellectual activities of living beings. Because the notion of computation unifies the concept of a machine, computer vision can be understood as an application of modern approaches to artificial intelligence, such as machine learning and cognitive psychology. Computer vision is mainly concerned with processing different types of sensor data, resulting in “perception of machines”. Tools from image processing, shape analysis and probabilistic inference, i.e., learning theory, form the current arsenal of computer vision researchers. Machine perception plays a very important role in many artificial intelligence applications that rely on sensors. There are numerous practical situations in which we acquire sensor data, e.g., from mobile robots, security cameras, and service and recreational robots. Making sense of this sensor data is essential for increasing the degree of automation with which the data is used.
In my thesis I will address some of the most challenging issues in two important open problems, viz. object recognition and autonomous navigation, which remain central to robotic, or in other words computational, intelligence. These problems are concerned with endowing computers with abilities to recognize and navigate that are comparable to those of humans. Object boundaries are very useful descriptors for recognizing objects, yet extracting boundaries from real images has remained a notoriously open problem in the vision community for several decades. In the first part I will present novel techniques for extracting object boundaries. The techniques are based on the practically successful, state-of-the-art Bayesian filtering framework, on well-founded geometric properties relating boundaries and skeletons, and on robust high-level shape analysis.
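To make the role of Bayesian filtering concrete, the sketch below shows the generic predict, weight, and resample recursion of a particle filter on a toy one-dimensional state with a Gaussian likelihood. The state, noise levels, and observation model are illustrative placeholders only, not the contour-extraction model developed in the thesis.

# Minimal particle-filter sketch of the Bayesian filtering recursion:
# predict with a motion model, weight by a measurement likelihood, then
# resample. The 1-D state and toy likelihood below are placeholders, not
# the boundary-extraction model proposed in the thesis.
import numpy as np

def particle_filter_step(particles, weights, observation,
                         process_noise=0.1, obs_noise=0.5):
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + np.random.normal(0.0, process_noise, particles.shape)
    # Update: reweight particles by a Gaussian measurement likelihood.
    likelihood = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: track a slowly drifting 1-D quantity from noisy observations.
particles = np.random.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for t in range(20):
    observation = 0.05 * t + np.random.normal(0.0, 0.5)
    particles, weights = particle_filter_step(particles, weights, observation)
print("estimate:", particles.mean())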
Acquiring global maps of their environments is crucial for robots to localize themselves and navigate autonomously. Although there has been a lot of progress in achieving autonomous mobility, e.g., in the DARPA Grand Challenges of 2005 and 2007, the mapping problem itself remains unsolved, and solving it is essential for robust autonomy in hard cases such as rescue arenas and collaborative exploration. In the second part I will present techniques for merging maps acquired by multiple robots as well as by a single robot. We developed physics-based energy minimization techniques as well as shape-based techniques for scalable alignment of maps. Our shape-based techniques combine high-level vision techniques that exploit similarities among maps with strong statistical methods that handle uncertainty in a Bayesian sense.
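As a point of reference for what map alignment involves, the sketch below rigidly aligns two 2-D point clouds by iterating nearest-neighbour correspondences and a closed-form (SVD-based) minimizer of the summed squared-distance energy, i.e., a standard ICP baseline. It is only an illustration of the alignment objective, not the physics-based or shape-based merging techniques proposed here.

# ICP-style sketch of rigid 2-D map alignment: maps are treated as point
# clouds, correspondences are nearest neighbours, and the rigid transform
# minimizing the summed squared distances (an energy) is found in closed
# form via SVD (Kabsch). A standard baseline, not the thesis's method.
import numpy as np

def align_maps(source, target, iterations=20):
    """Rigidly align source (N x 2) onto target (M x 2)."""
    src = source.copy()
    for _ in range(iterations):
        # Correspondences: nearest target point for every source point.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]
        # Closed-form rigid transform minimizing the squared-distance energy.
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t
    return src

# Toy usage: recover a known rotation and translation of a random scan.
rng = np.random.default_rng(0)
target = rng.uniform(0, 10, (200, 2))
angle = np.deg2rad(15)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
source = target @ R_true.T + np.array([2.0, -1.0])
aligned = align_maps(source, target)
print("mean residual:", np.abs(aligned - target).mean())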