Title: Human action recognition based on 3D convolution neural networks from RGBD videos
Authors: Al-Akam, Rawya
Paulus, Dietrich
Gharabaghi, Darius
Citation: WSCG 2018: poster papers proceedings: 26th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 18-26.
Issue Date: 2018
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2018/!!_CSRN-2803.pdf
http://hdl.handle.net/11025/34633
ISBN: 978-80-86943-42-8
ISSN: 2464-4617
Keywords: action recognition;RGBD videos;optical flow;3D convolutional neural network;support vector machines
Abstract: Human action recognition with color and depth sensors has received increasing attention in image processing and computer vision. The goal of this paper is to develop a novel deep model for recognizing human actions from a fusion of RGB-D videos based on a Convolutional Neural Network. This work proposes a novel 3D Convolutional Neural Network architecture that implicitly captures motion information between adjacent frames, in two main steps. First, optical flow is used to extract motion information from the spatio-temporal domains of the different RGB-D video actions; this information is used to compute the feature vectors of a deep 3D CNN model. Second, a 3D CNN is trained and evaluated on three channels of the input video sequences (RGB, depth, and the combination of both, RGB-D) to obtain a feature representation for the 3D CNN model. To evaluate accuracy, Convolutional Neural Networks based on the different data channels are trained; in addition, features extracted from the 3D Convolutional Neural Network are examined with a support vector machine to further improve human action recognition. With these methods, we demonstrate that the test results from the combined RGB-D channels are better than the results from each channel trained separately by a baseline Convolutional Neural Network, and that they outperform the state of the art on the same public datasets.
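
The abstract outlines a three-stage pipeline: optical flow extraction, 3D CNN feature learning over RGB, depth, and fused RGB-D channels, and SVM classification of the learned features. As an illustration only, here is a minimal Python sketch of such a pipeline; the Farneback flow algorithm, the PyTorch layer sizes, and the SVM kernel are assumptions made for exposition, not the architecture reported in the paper.

```python
# Minimal sketch of the described pipeline: optical flow -> 3D CNN features -> SVM.
# All layer sizes and the choice of Farneback flow are illustrative assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC


def optical_flow_stack(frames):
    """Dense optical flow between adjacent grayscale frames (assumption: Farneback)."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)          # (H, W, 2): horizontal and vertical motion
    return np.stack(flows)          # (T-1, H, W, 2)


class Simple3DCNN(nn.Module):
    """Toy 3D CNN over a clip volume (flow, RGB, depth, or fused RGB-D channels)."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),    # global pooling -> 64-d feature vector
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x, return_features=False):
        # x: (batch, channels, time, height, width)
        f = self.features(x).flatten(1)
        return f if return_features else self.classifier(f)


# After training, the pooled CNN features can be fed to an SVM, mirroring the
# paper's CNN-feature + SVM evaluation step (hypothetical variable names):
# svm = SVC(kernel="rbf").fit(train_features, train_labels)
```

A clip tensor of shape (batch, channels, time, height, width) would be built by stacking the flow (or RGB-D) volumes along the channel axis before the forward pass; for two-channel flow input, `in_channels=2`.
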
Rights: © Václav Skala - Union Agency
Appears in Collections:WSCG 2018: Poster Papers Proceedings

Files in This Item:
Al-Akam.pdf: Full text, 1.3 MB, Adobe PDF

