Title: Hybrid Approach for Orientation-Estimation of Rotating Humans in Video Frames Acquired by Stationary Monocular Camera
Authors: Zwettler, Gerald A.
Citation: WSCG 2020: Full Papers Proceedings: 28th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, p. 39-47.
Publisher: Václav Skala - UNION Agency
Keywords: object tracking;orientation-estimation;optical flow;image alignment;human skeleton extraction;human pose estimation;pixel homogeneity
Abstract: The precise orientation-estimation of humans relative to the pose of a monocular camera system is a challenging task due to the general aspects of camera calibration and the deformable nature of a human body in motion. Novel Deep Learning approaches for precise object pose-estimation in robotics are therefore hard to adapt to human body analysis. In this work, a hybrid approach for the accurate estimation of a human body rotation relative to a camera system is presented, significantly improving results derived from poseNet by analyzing the optical flow in a frame-to-frame comparison. The human body, rotating in place in T-pose, is aligned in the center using object tracking methods to compensate for translations of the body movement. After 2D skeleton extraction, the optical flow is calculated for a region of interest (ROI) aligned relative to the vertical skeleton joint representing the spine and compared frame by frame. To evaluate the suitability of the clothing as a basis for good features, the local pixel homogeneity is taken into consideration, restricting the optical flow to heterogeneous regions with distinctive features such as imprint patterns, buttons, or buckles, besides local illumination changes. Based on the mean optical flow, with a coarse approximation of the axial body shape as an ellipse, an accuracy between 0.1° and 2.0° for a target rotation of 10° is achieved per frame-to-frame comparison, evaluated and validated on both Computer Generated Imagery (CGI) renderings and real-world videos of people wearing clothing of varying feature appropriateness.
Rights: © Václav Skala - UNION Agency
Appears in Collections: WSCG 2020: Full Papers Proceedings
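The final step described in the abstract, turning the mean horizontal optical flow in the torso ROI into a rotation angle under a coarse circular/elliptical body model, could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the homogeneity threshold, and the small-angle circular approximation (surface displacement ≈ radius × Δθ) are all assumptions made for the example.

```python
import numpy as np

def estimate_rotation_deg(flow_dx, homogeneity, radius_px, homogeneity_thresh=0.5):
    """Estimate frame-to-frame body rotation from horizontal optical flow.

    flow_dx            -- per-pixel horizontal flow (pixels) inside the torso ROI
    homogeneity        -- per-pixel local homogeneity in [0, 1]; high = featureless
    radius_px          -- approximate torso half-width in pixels (coarse body model)
    homogeneity_thresh -- illustrative cutoff; only feature-rich pixels contribute

    Returns the rotation in degrees, or None if the clothing is too uniform
    to yield reliable flow vectors.
    """
    mask = homogeneity < homogeneity_thresh  # keep only heterogeneous regions
    if not np.any(mask):
        return None
    mean_dx = float(np.mean(flow_dx[mask]))
    # Small-angle model: a surface point at distance radius_px from the
    # rotation axis moves horizontally by about radius_px * dtheta.
    return float(np.degrees(mean_dx / radius_px))
```

For a synthetic check, a uniform flow field of `radius_px * radians(10)` pixels over fully heterogeneous pixels recovers a 10° rotation.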