Title: Feature extraction and localisation using scale-invariant feature transform on 2.5D image
Authors: SukTing, Pui
Minoi, Jacey-Lynn
Lim, Terrin
Oliveira, João Fradinho
Gillies, Duncan Fyfe
Citation: WSCG 2014: communication papers proceedings: 22nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 179-187.
Issue Date: 2014
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2014/!!_2014-WSCG-Communication.pdf
http://hdl.handle.net/11025/26413
ISBN: 978-80-86943-71-8
Keywords: feature extraction; localisation; landmark; Otsu's algorithm
Abstract: The standard starting point for extracting information from human face image data is the detection of key anatomical landmarks, a vital initial stage for applications such as face recognition, facial analysis and synthesis. Locating facial landmarks in images is an important task in image processing, and detecting them automatically remains challenging because the appearance of facial landmarks can vary tremendously with facial variation. Detecting and extracting landmarks from raw face data is usually done manually by trained and experienced scientists or clinicians, and the landmarking is a laborious process. We therefore aim to automate as much of the facial-feature landmarking process as possible. In this paper, we present and discuss a new automatic landmarking method for face data using 2.5-dimensional (2.5D) range images. We apply the Scale-Invariant Feature Transform (SIFT) to extract feature vectors and Otsu's method to obtain a general threshold value for landmark localisation. We have also developed an interactive tool to ease visualisation of the overall landmarking process. The tool lets users adjust and explore threshold values for further analysis, so that the threshold values chosen for detecting and extracting important keypoints and/or regions of facial features can later be applied automatically to new datasets acquired under the same controlled lighting and pose restrictions. We measured the accuracy of the automatic landmarking against manual landmarking and found the differences to be marginal. This paper describes our own implementation of the SIFT and Otsu's algorithms, analyses the results of the landmark detection, and highlights future work.
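A minimal sketch of the pipeline described in the abstract, not the authors' own implementation: it uses OpenCV's built-in SIFT detector on a depth (range) image rescaled to 8-bit, and a small NumPy Otsu routine applied to the keypoint response values to separate strong candidate-landmark keypoints from weak ones. The input filename and the choice of thresholding the keypoint responses (rather than some other SIFT-derived quantity) are assumptions made for illustration.

```python
import numpy as np
import cv2

def otsu_threshold(values, bins=256):
    """Otsu's method on a 1-D array: choose the threshold that
    maximises the between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = prob[:i].sum(), prob[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:i] * centers[:i]).sum() / w0
        mu1 = (prob[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Load a 2.5D range (depth) image and rescale to 8-bit for SIFT.
# "face_range.png" is a hypothetical file name.
depth = cv2.imread("face_range.png", cv2.IMREAD_UNCHANGED)
depth8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Detect SIFT keypoints and descriptors on the range image.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(depth8, None)

# Apply Otsu's method to the keypoint responses: keypoints above the
# automatically chosen threshold are kept as candidate landmarks.
responses = np.array([kp.response for kp in keypoints])
t = otsu_threshold(responses)
candidates = [kp for kp, r in zip(keypoints, responses) if r >= t]
print(f"{len(candidates)} of {len(keypoints)} keypoints kept (Otsu t = {t:.4f})")
```

In an interactive setting, the computed threshold t would serve only as a starting value; as the abstract notes, the authors' visualisation tool lets users adjust and explore threshold values before fixing one for automatic use on new data captured under the same lighting and pose restrictions.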
Rights: © Václav Skala - UNION Agency
Appears in Collections: WSCG 2014: Communication Papers Proceedings

Files in This Item:
File: Sukting.pdf | Description: Full text | Size: 1.7 MB | Format: Adobe PDF


