Full metadata record
DC field: Value [Language]
dc.contributor.author: Armagan, Anil
dc.contributor.author: Garcia-Hernando, Guillermo
dc.contributor.author: Baek, Seungryul
dc.contributor.author: Hampali, Shreyas
dc.contributor.author: Rad, Mahdi
dc.contributor.author: Zhang, Zhaohui
dc.contributor.author: Xie, Shipeng
dc.contributor.author: Chen, MingXiu
dc.contributor.author: Zhang, Boshen
dc.contributor.author: Xiong, Fu
dc.contributor.author: Yang, Xiao
dc.contributor.author: Cao, Zhiguo
dc.contributor.author: Yuan, Junsong
dc.contributor.author: Ren, Pengfei
dc.contributor.author: Huang, Weiting
dc.contributor.author: Sun, Haifeng
dc.contributor.author: Hrúz, Marek
dc.contributor.author: Kanis, Jakub
dc.contributor.author: Krňoul, Zdeněk
dc.contributor.author: Wan, Qingfu
dc.contributor.author: Li, Shile
dc.contributor.author: Yang, Linlin
dc.contributor.author: Lee, Dongheui
dc.contributor.author: Yao, Angela
dc.contributor.author: Zhou, Weiguo
dc.contributor.author: Mei, Sijia
dc.contributor.author: Liu, Yunhui
dc.contributor.author: Spurr, Adrian
dc.contributor.author: Iqbal, Umar
dc.contributor.author: Molchanov, Pavlo
dc.contributor.author: Weinzaepfel, Philippe
dc.contributor.author: Brégier, Romain
dc.contributor.author: Rogez, Grégory
dc.contributor.author: Lepetit, Vincent
dc.contributor.author: Kim, Tae-Kyun
dc.date.accessioned: 2021-02-22T11:00:21Z
dc.date.available: 2021-02-22T11:00:21Z
dc.date.issued: 2020
dc.identifier.citation: ARMAGAN, A., GARCIA-HERNANDO, G., BAEK, S., HAMPALI, S., RAD, M., ZHANG, Z., XIE, S., CHEN, M., ZHANG, B., XIONG, F., YANG, X., CAO, Z., YUAN, J., REN, P., HUANG, W., SUN, H., HRÚZ, M., KANIS, J., KRŇOUL, Z., WAN, Q., LI, S., YANG, L., LEE, D., YAO, A., ZHOU, W., MEI, S., LIU, Y., SPURR, A., IQBAL, U., MOLCHANOV, P., WEINZAEPFEL, P., BRÉGIER, R., ROGEZ, G., LEPETIT, V., KIM, T. Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation Under Hand-Object Interaction. In: Computer Vision - ECCV 2020, 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII. Cham: Springer, 2020. pp. 85-101. ISBN 978-3-030-58591-4, ISSN 0302-9743. [cs]
dc.identifier.isbn: 978-3-030-58591-4
dc.identifier.issn: 0302-9743
dc.identifier.uri: 2-s2.0-85097407772
dc.identifier.uri: http://hdl.handle.net/11025/42724
dc.format: 17 pp. [cs]
dc.format.mimetype: application/pdf
dc.language.iso: en [en]
dc.publisher: Springer [en]
dc.relation.ispartofseries: Computer Vision - ECCV 2020, 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII [en]
dc.rights: Full text is not available. [cs]
dc.rights: © Springer
dc.title: Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation Under Hand-Object Interaction [en]
dc.type: conference paper [cs]
dc.type: conferenceObject [en]
dc.rights.access: closedAccess [en]
dc.type.version: publishedVersion [en]
dc.description.abstract-translated: We study how well different types of approaches generalise in the task of 3D hand pose estimation under single-hand scenarios and hand-object interaction. We show that the accuracy of state-of-the-art methods can drop, and that they fail mostly on poses absent from the training set. Unfortunately, since the space of hand poses is high-dimensional, it is inherently not feasible to cover the whole space densely, despite recent efforts in collecting large-scale training datasets. This sampling problem is even more severe when hands are interacting with objects and/or inputs are RGB rather than depth images, as RGB images also vary with lighting conditions and colors. To address these issues, we designed a public challenge (HANDS'19) to evaluate the abilities of current 3D hand pose estimators (HPEs) to interpolate and extrapolate the poses of a training set. More exactly, HANDS'19 is designed (a) to evaluate the influence of both depth and color modalities on 3D hand pose estimation, in the presence or absence of objects; (b) to assess the generalisation abilities w.r.t. four main axes: shapes, articulations, viewpoints, and objects; (c) to explore the use of a synthetic hand model to fill the gaps in current datasets. Through the challenge, the overall accuracy has dramatically improved over the baseline, especially on extrapolation tasks, from 27 mm to 13 mm mean joint error. Our analyses highlight the impacts of data pre-processing, ensemble approaches, the use of a parametric 3D hand model (MANO), and different HPE methods/backbones. [en]
dc.subject.translated: Hand Pose Estimation [en]
dc.identifier.doi: 10.1007/978-3-030-58592-1_6
dc.type.status: Peer-reviewed [en]
dc.identifier.obd: 43930809
dc.project.ID: LO1506/PUNTIS - Sustainability support for the NTIS centre - New Technologies for the Information Society [cs]
Appears in collections: Konferenční příspěvky / Conference papers (NTIS)
Konferenční příspěvky / Conference Papers (KKY)
OBD

Files attached to this record:
File: Armagan2020_Chapter_MeasuringGeneralisationToUnsee.pdf, Size: 3.31 MB, Format: Adobe PDF


Use this identifier to cite or link to this record: http://hdl.handle.net/11025/42724

