Relating Things and Stuff via Object Property Interactions
[Figure: Graphical illustration of our Augmented CRF (ACRF) model; see Figure 2 in the TPAMI paper for details.]
In the last few years, substantially different approaches have been adopted for segmenting and detecting “things” (object categories with a well-defined shape, such as people and cars) and “stuff” (object categories with an amorphous spatial extent, such as grass and sky). While things are typically detected by sliding-window or Hough-transform-based methods, stuff detection is generally formulated as a pixel- or segment-wise classification problem. This paper proposes a framework for scene understanding that models both things and stuff using a common representation, while preserving their distinct nature through a property list. This representation allows us to enforce sophisticated geometric and semantic relationships between thing and stuff categories in a single graphical model. We use recent advances in discrete optimization to efficiently perform maximum a posteriori (MAP) inference in this model. We evaluate our method on the Stanford dataset by comparing it against state-of-the-art methods for object segmentation and detection. We also show that our method achieves competitive performance on the challenging PASCAL’09 segmentation dataset.
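To make the formulation concrete, below is a minimal Python sketch (not the authors' implementation) of a pairwise CRF whose label set mixes thing and stuff classes and whose pairwise terms penalize implausible adjacencies. Plain ICM stands in for the stronger discrete optimizers the paper uses, and all labels, costs, and penalty values are invented for illustration.

# Minimal sketch (not the authors' code): MAP inference in a pairwise CRF
# over a label set that mixes "thing" and "stuff" classes. Iterated
# conditional modes (ICM) stands in for the paper's discrete optimizers
# so the example stays self-contained.
import numpy as np

LABELS = ["sky", "grass", "car", "person"]   # stuff: 0-1, things: 2-3
H, W, L = 8, 8, len(LABELS)

rng = np.random.default_rng(0)
unary = rng.random((H, W, L))                # per-pixel label costs (lower is better)

# Pairwise compatibility: a Potts-style penalty, softened for label pairs
# that plausibly touch (car/grass) and raised for pairs that should not
# (car directly adjacent to sky). Values are made up for illustration.
pairwise = np.ones((L, L)) - np.eye(L)       # base Potts: cost 1 if labels differ
pairwise[2, 0] = pairwise[0, 2] = 2.0        # discourage car next to sky
pairwise[2, 1] = pairwise[1, 2] = 0.5        # allow car next to grass

def local_energy(x, i, j, l):
    """Unary cost of label l at (i, j) plus pairwise terms to 4-neighbors."""
    e = unary[i, j, l]
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W:
            e += pairwise[l, x[ni, nj]]
    return e

# ICM: start from the unary-optimal labeling, then sweep until no single
# pixel can strictly lower its local energy by switching labels. Each
# accepted move lowers the global energy, so the loop terminates.
x = unary.argmin(axis=2)
changed = True
while changed:
    changed = False
    for i in range(H):
        for j in range(W):
            best = min(range(L), key=lambda l: local_energy(x, i, j, l))
            if local_energy(x, i, j, best) < local_energy(x, i, j, x[i, j]) - 1e-12:
                x[i, j] = best
                changed = True

print(x)  # approximate MAP labeling of the toy grid

In the actual model, the property list would further distinguish thing labels (which carry shape and detection evidence) from stuff labels; the compatibility matrix above is only the simplest stand-in for such geometric and semantic relationships.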
Publications
Acknowledgements
We acknowledge the support of the Gigascale Systems Research Center and NSF CPS grant #0931474.
Contact: sunmin at umich dot edu and bsookim at umich dot edu
Last update: September 4, 2012