Learning Context Sensitive Shape Similarity by Graph Transduction

   This page describes our system published in: Xiang Bai, Xingwei Yang, Longin Jan Latecki, Wenyu Liu, Zhuowen Tu. Learning Context Sensitive Shape Similarity by Graph Transduction. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 2009.

    Our approach can improve any shape similarity measure. In our experiments we improved the inner-distance shape context (IDSC). Our retrieval rate on the MPEG-7 Core Experiment CE-Shape-1 test set is 91.61%.
The MPEG-7 Core Experiment CE-Shape-1 test set consists of 70 shape classes with 20 members each. Many shape matching methods have been evaluated on this data set, and their retrieval/matching rates have increased substantially over the years. The methods known to us and their performance are listed under RETRIEVAL RESULTS.
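    For reference, the retrieval rate on MPEG-7 CE-Shape-1 is conventionally measured with the so-called bull's-eye test: for every query, one counts how many of the 20 shapes from its class appear among the 40 most similar shapes. Below is a minimal sketch of that computation; it assumes a precomputed 1400 x 1400 pairwise distance matrix with shapes ordered class by class, and it is not the evaluation code used for the published numbers.

    import numpy as np

    def bulls_eye_score(dist, n_classes=70, per_class=20, top_k=40):
        """Bull's-eye retrieval rate for an (n x n) pairwise distance matrix.

        Assumes shapes are ordered class by class (70 classes x 20 shapes for
        MPEG-7 CE-Shape-1); the query itself counts as a correct hit.
        """
        n = n_classes * per_class
        labels = np.repeat(np.arange(n_classes), per_class)
        hits = 0
        for q in range(n):
            nearest = np.argsort(dist[q])[:top_k]        # top_k most similar shapes
            hits += np.count_nonzero(labels[nearest] == labels[q])
        return hits / float(n * per_class)               # 1.0 = perfect retrieval

    Running the same function first on the raw IDSC distance matrix and then on the learned distances is a natural way to quantify the improvement reported above.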

 

Shape Matching/Retrieval


INTRODUCTION

    Shape matching/retrieval is a critical problem in computer vision. There are many different kinds of shape matching methods, and the progress in improving the matching rate has been substantial in recent years. However, nearly all of these approaches focus on pair-wise shape similarity measures. It seems obvious that the more similar two shapes are, the smaller their difference, as measured by some distance function, should be. Yet this statement ignores the fact that some differences are more relevant than others for shape similarity. It is not yet clear how biological vision systems perform shape matching; it is clear, though, that shape matching involves high-level understanding of shapes. In particular, shapes in the same class can differ significantly because of in-class variation, distortion, or non-rigid transformation. In other words, even if two shapes belong to the same class, the distance between them may be very large if the distance measure cannot capture the intrinsic properties of the shape.
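    To give a concrete flavor of how context-sensitive distances can be learned, here is a minimal sketch of label propagation over a k-nearest-neighbor shape graph, in the spirit of the graph transduction used in our approach. The kernel width sigma, the neighborhood size k, and the iteration count are illustrative assumptions, not the settings used in the published experiments.

    import numpy as np

    def learned_similarity(dist, query, sigma=0.3, k=10, n_iter=1000):
        """Propagate similarity from the query shape through a k-NN graph.

        dist  : (n x n) pairwise shape distances (e.g. IDSC)
        query : index of the query shape
        Returns an n-vector f; larger f[i] means shape i is more similar to
        the query in the learned, context-sensitive sense.
        """
        n = dist.shape[0]
        # Gaussian affinities, kept only between k-nearest-neighbor pairs
        w = np.exp(-(dist ** 2) / (sigma ** 2))
        knn = np.argsort(dist, axis=1)[:, :k]
        mask = np.zeros((n, n), dtype=bool)
        mask[np.arange(n)[:, None], knn] = True
        w = np.where(mask | mask.T, w, 0.0)
        # row-stochastic transition matrix of the shape graph
        p = w / w.sum(axis=1, keepdims=True)
        # label propagation: the query keeps similarity 1, all others diffuse
        f = np.zeros(n)
        f[query] = 1.0
        for _ in range(n_iter):
            f = p @ f
            f[query] = 1.0
        return f

    Retrieval then ranks the database by descending f instead of by ascending raw distance; because f accumulates similarity along chains of intermediate shapes, it behaves like a distance along the shape manifold rather than a direct pairwise comparison.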

    The figure below illustrates how the proposed approach improves the performance of IDSC. A particularly interesting case is shown in the first row, where IDSC returns only one correct result for the query octopus: it retrieves nine apples as the most similar shapes. Since the query octopus is occluded, IDSC ranks it as more similar to the apples than to the other octopuses. In addition, since IDSC is invariant to rotation, it confuses the tentacles with the apple stem. Even with only one correct shape available, the proposed method learns that the apple stem is a highly relevant difference, although the tentacles of the octopuses exhibit significant variation in shape. We restate that this is possible because the new learned distances are induced by geodesic paths in the shape manifold spanned by the known shapes. Consequently, the learned distances retrieve nine correct shapes. The only wrong result is the elephant, whose trunk and legs are similar to the tentacles of the octopus.

    In this figure, the first column shows the query shape. The remaining 10 columns show the most similar shapes retrieved by IDSC (odd rows) and by our method (even rows).
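    Purely for illustration, the two rankings compared in the figure could be produced as follows, reusing the hypothetical dist matrix and the learned_similarity sketch from above; the query index is arbitrary, and under either ranking the query itself typically appears as its own first hit.

    query = 0                                                        # arbitrary illustrative query index
    by_idsc    = np.argsort(dist[query])[:10]                        # 10 most similar by raw IDSC
    by_learned = np.argsort(-learned_similarity(dist, query))[:10]   # 10 most similar by learned similarity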

