Visual attention mechanisms allow humans to extract relevant and important information from raw input percepts. Many applications in robotics and computer vision have modeled human visual attention using a bottom-up, data-centric approach. In contrast, recent studies in cognitive science highlight the advantages of a top-down approach to attention, especially in applications involving goal-directed search. In this paper, we propose a top-down approach for extracting salient objects and regions of space. The top-down methodology first isolates the individual objects in an unorganized point cloud and then compares each object for uniqueness. We define a measure of saliency based on geodesic distances over an object's surface. Our method works on 3D point cloud data and identifies salient objects of high curvature and unique silhouette. Because these are among the most distinctive features of a scene, they are robust to clutter, occlusion and viewpoint changes. We provide the details of the proposed method and initial experimental results.
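The pipeline summarized above — segment objects from a point cloud, describe each by geodesic distances on its surface, and score uniqueness against the rest of the scene — can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes the surface is approximated by a k-nearest-neighbour graph, the geodesic descriptor is a normalised histogram of pairwise shortest-path distances, and saliency is each object's chi-squared distance from the scene's mean descriptor. All function names and parameter choices here are hypothetical.

```python
# Hedged sketch of a geodesic-distance saliency measure for point-cloud objects.
# Assumptions (not from the paper): a k-NN graph stands in for the surface,
# geodesics are shortest paths on that graph, and an object's descriptor is a
# normalised histogram of its pairwise geodesic distances.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree


def knn_graph(points, k=6):
    """Dense adjacency matrix of the k-NN graph; edge weight = Euclidean length."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)  # first neighbour is the point itself
    n = len(points)
    adj = np.full((n, n), np.inf)             # inf marks "no edge" for csgraph
    np.fill_diagonal(adj, 0.0)
    for i in range(n):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            adj[i, j] = adj[j, i] = d
    return adj


def geodesic_descriptor(points, bins=16, k=6):
    """Histogram of pairwise geodesic distances, normalised to unit sum."""
    geo = shortest_path(knn_graph(points, k), method="D")  # Dijkstra
    vals = geo[np.isfinite(geo) & (geo > 0)]
    hist, _ = np.histogram(vals / vals.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()


def saliency_scores(objects):
    """Chi-squared distance of each object's descriptor from the scene mean:
    the more an object's geodesic-distance profile deviates from the rest of
    the scene, the higher its saliency."""
    descs = np.array([geodesic_descriptor(p) for p in objects])
    mean = descs.mean(axis=0)
    return 0.5 * ((descs - mean) ** 2 / (descs + mean + 1e-12)).sum(axis=1)
```

On a toy scene of two flat patches and one curved object, the curved object receives the highest score, because its distribution of surface geodesic distances differs most from the scene average. Normalising each object's distances by their maximum makes the descriptor scale-invariant, which is one simple way to compare objects of different sizes.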