Full metadata record
DC Field / Value / Language
dc.contributor.author: Přibil, Jiří
dc.contributor.author: Přibilová, Anna
dc.contributor.author: Matoušek, Jindřich
dc.date.accessioned: 2020-03-16T11:00:21Z
dc.date.available: 2020-03-16T11:00:21Z
dc.date.issued: 2019
dc.identifier.citation: PŘIBIL, J., PŘIBILOVÁ, A., MATOUŠEK, J. Artefact Determination by GMM-Based Continuous Detection of Emotional Changes in Synthetic Speech. In: 2019 42nd International Conference on Telecommunications and Signal Processing (TSP). New York: IEEE, 2019, pp. 45-48. ISBN 978-1-72811-864-2. [en]
dc.identifier.isbn: 978-1-72811-864-2
dc.identifier.uri: 2-s2.0-85071069572
dc.identifier.uri: http://hdl.handle.net/11025/36659
dc.description.abstract: The paper describes a system for automatic detection of speech artefacts based on a Gaussian mixture model (GMM) classifier. The system can detect one or more artefacts in synthetic speech produced by a text-to-speech system. The artefact detection uses continuous GMM classification of emotional states in the 2-D affective space of valence and arousal over the whole sentence and calculates the final change in the evaluated emotions. A detected shift towards negative emotions indicates the presence of an artefact in the analysed sentence. Basic experiments confirm the functionality of the developed system, which detects artefacts with sufficient accuracy; the results are comparable to those attained by a standard listening-test method. Additional investigations show a relatively strong influence of the number of mixtures, the number of emotional classes used, and the types of speech features on the evaluated emotional shift. [en]
dc.format: 4 s. [cs]
dc.format.mimetype: application/pdf
dc.language.iso: en [en]
dc.publisher: IEEE [en]
dc.relation.ispartofseries: 2019 42nd International Conference on Telecommunications and Signal Processing (TSP) [en]
dc.rights: Plný text není přístupný. (Full text is not available.) [cs]
dc.rights: © IEEE [en]
dc.title: Artefact Determination by GMM-Based Continuous Detection of Emotional Changes in Synthetic Speech [en]
dc.type: konferenční příspěvek (conference paper) [cs]
dc.type: conferenceObject [en]
dc.rights.access: restrictedAccess [en]
dc.type.version: publishedVersion [en]
dc.subject.translated: GMM classification [en]
dc.subject.translated: statistical analysis [en]
dc.subject.translated: synthetic speech evaluation [en]
dc.subject.translated: text-to-speech system [en]
dc.identifier.doi: 10.1109/TSP.2019.8768826
dc.type.status: Peer-reviewed [en]
dc.identifier.document-number: 493442800010
dc.identifier.obd: 43927316
dc.project.ID: GA19-19324S/Plně trénovatelná syntéza české řeči z textu s využitím hlubokých neuronových sítí (Fully trainable Czech text-to-speech synthesis using deep neural networks) [cs]
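The detection scheme described in the abstract (per-emotion GMMs, continuous classification over a sentence, and an emotional-shift score whose negative value flags an artefact) can be illustrated with a minimal sketch. This is not the authors' implementation: the emotion classes, their valence values, the 13-dimensional feature vectors, and all training data below are illustrative assumptions standing in for real speech features such as MFCCs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical emotion classes with assumed valence coordinates in the
# 2-D affective space (the paper's actual classes and features differ).
EMOTIONS = {"joy": +1.0, "neutral": 0.0, "anger": -1.0}

# Train one GMM per emotion on synthetic 13-dim "speech features".
models = {}
for i, emo in enumerate(EMOTIONS):
    feats = rng.normal(loc=3.0 * i, scale=1.0, size=(200, 13))
    models[emo] = GaussianMixture(n_components=4, random_state=0).fit(feats)

def classify(frame_feats):
    """Pick the emotion whose GMM gives the highest mean log-likelihood."""
    scores = {e: m.score(frame_feats) for e, m in models.items()}
    return max(scores, key=scores.get)

def emotional_shift(sentence_frames, window=20):
    """Continuously classify fixed-size windows of frames and return the
    change in mean valence between the first and second half of the
    sentence; a clearly negative shift would flag a possible artefact."""
    valences = []
    for start in range(0, len(sentence_frames) - window + 1, window):
        emo = classify(sentence_frames[start:start + window])
        valences.append(EMOTIONS[emo])
    half = max(1, len(valences) // 2)
    return np.mean(valences[half:]) - np.mean(valences[:half])

# Synthetic sentence drifting from "joy"-like to "anger"-like features.
first = rng.normal(loc=0.0, scale=1.0, size=(100, 13))   # near "joy" cluster
second = rng.normal(loc=6.0, scale=1.0, size=(100, 13))  # near "anger" cluster
shift = emotional_shift(np.vstack([first, second]))
print(shift)  # negative shift -> suspected artefact
```

In this toy setup the shift compares the first and second halves of one sentence; the paper's continuous evaluation over the whole sentence, and its choice of feature types and mixture counts, are exactly the factors the abstract reports as influential.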
Appears in Collections:Konferenční příspěvky / Conference Papers (KKY)
OBD

Files in This Item:
File / Size / Format
TSP_2019_Pribil_ArtefactDetermination.pdf / 189.85 kB / Adobe PDF


Please use this identifier to cite or link to this item: http://hdl.handle.net/11025/36659

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
