Improving Naturalness and Intelligibility of Text-to-Speech Systems

The latest generation of speech synthesis techniques has markedly increased the quality of Text-to-Speech (TTS) systems in terms of the naturalness and intelligibility of the voice. Such systems are often referred to in the literature as corpus-based TTS. Unit selection is one of the corpus-based TTS technologies: it accumulates human (natural) speech in a database of reusable units, and generates synthesized speech by concatenating those units appropriately. Among the possible techniques for corpus-based TTS systems, unit-selection methods are the most sensitive to the size and quality of the speech corpus, because both the algorithms and the databases are determined by a statistical approach based on a large-scale speech corpus. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences.

The speech segmentation phase plays a crucial role in the naturalness of a unit-selection voice. Speech segmentation defines the acoustic unit boundaries, which divide the audio files in the speech corpus into individual sounds. An inaccurate alignment of the corpus produces acoustic samples that are incomplete or marred by sounds belonging to neighboring units, degrading the intelligibility and naturalness of the system's output.
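
A minimal sketch of what an inaccurate alignment looks like in practice, using hypothetical timings (not data from the study): each unit carries a phone label and its start/end times, and the misalignment is the offset between the automatic and reference boundaries.

```python
# Illustrative sketch (hypothetical timings): detecting corpus misalignment by
# comparing an automatic segmentation against a reference one.
# Each unit is (phone, start_sec, end_sec).
reference = [("s", 0.00, 0.12), ("a", 0.12, 0.25), ("t", 0.25, 0.33)]
automatic = [("s", 0.00, 0.10), ("a", 0.10, 0.27), ("t", 0.27, 0.33)]

# Offset of each unit's start boundary; a negative value means the automatic
# boundary cuts into the previous unit, a positive one clips the current unit.
offsets = [round(auto[1] - ref[1], 3) for ref, auto in zip(reference, automatic)]
print(offsets)  # [0.0, -0.02, 0.02]
```

Even a 20 ms offset like the ones above is enough to leave a fragment of a neighboring phone inside a reusable unit.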

The study in "Statistical Context-Dependent Units Boundary Correction for Corpus-Based TTS" by Dr Claudio Zito, in collaboration with the University of Pisa, the Institute of Cognitive Science and Technologies led by Dr Piero Cosi, and Loquendo, proposes a statistical method based on regression trees to improve the naturalness and the intelligibility of unit-selection TTS systems.

In this study, we present an innovative technique for speaker adaptation to improve the accuracy of segmentation, with application to unit-selection Text-To-Speech (TTS) systems. Unlike conventional speaker-adaptation techniques, which attempt to improve segmentation accuracy using acoustic models that are more robust to the speaker's characteristics, we aim to use only context-dependent characteristics extracted with linguistic analysis techniques. In simple terms, we exploit the intuitive idea that context-dependent information is tightly correlated with the related acoustic waveform.

We propose a statistical model that predicts correcting values to reduce the systematic error produced by a state-of-the-art Hidden Markov Model (HMM) based speech segmentation.

From the training set, a duplex forced alignment is extracted for each utterance. Both alignments are preprocessed to cope with recognition errors and to obtain a unique, reliable transcription of the units. From this transcription, context-dependent information is extracted to build the corresponding senone. Each senone is then labeled with the systematic error computed by our error function. The wagon software takes the feature data (dataset) and the feature description (record description) as input and builds the regression tree.
Graphical representation of the steps required to build a CART
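
The inputs to wagon can be pictured as follows. The field names, values, and command-line flags here are illustrative assumptions, not the study's actual configuration; wagon (from the Edinburgh Speech Tools) expects a Lisp-style record description and a whitespace-separated data file with fields in the same order.

```text
;; train.desc — record description: one entry per field (assumed names)
((boundary_error float)              ; predictee: systematic error in seconds
 (phone          a e i o u p t k s)
 (syll_position  initial medial final)
 (stressed       0 1))

;; train.data — one feature vector per line, fields in description order
0.012   a  final   1
-0.008  t  initial 0
0.015   e  medial  1
```

A tree could then be grown with an invocation along the lines of `wagon -data train.data -desc train.desc -stop 10 -output error.tree`; treat the exact options as an assumption and consult the Edinburgh Speech Tools documentation.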

In other words, we can predict how HMM-based Automatic Speech Recognition (ASR) systems interpret the waveform signal, determining the systematic error in different contextual scenarios.

Our approach consists of two phases:

(1) identifying context-dependent phonetic unit classes (for instance, the class which identifies vowels as being the nucleus of monosyllabic words); and

(2) building a regression model that associates to each class the mean error made by the ASR during the segmentation of a single-speaker corpus.
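
The two phases above can be sketched as follows, with illustrative class names and error values (not the study's data): each training unit carries a context-dependent class label and the boundary error the ASR made, and the model stores the mean error per class as a correction.

```python
from collections import defaultdict
from statistics import mean

# Sketch of the two phases under illustrative data: each training unit carries
# a context-dependent class label (phase 1) and the boundary error in seconds
# that the ASR made on it.
training = [
    ("vowel_nucleus_monosyll", 0.012),
    ("vowel_nucleus_monosyll", 0.018),
    ("stop_word_initial", -0.008),
    ("stop_word_initial", -0.004),
]

by_class = defaultdict(list)
for cls, err in training:
    by_class[cls].append(err)

# Phase 2: associate the mean systematic error to each class...
correction = {cls: mean(errs) for cls, errs in by_class.items()}

# ...and subtract it from new boundaries belonging to the same class.
def correct_boundary(boundary, cls):
    return boundary - correction.get(cls, 0.0)

print(round(correct_boundary(0.250, "vowel_nucleus_monosyll"), 3))  # 0.235
```

Unseen classes fall back to a zero correction, so the model degrades gracefully on contexts it has no statistics for.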

The success of the approach is evaluated by comparing both the corrected unit boundaries and the state-of-the-art HMM segmentation against a reference alignment, which is assumed to be the optimal solution. The results of this study show that the context-dependent correction of unit boundaries has a positive influence on the forced alignment, especially when the misinterpretation of the phone is driven by acoustic properties linked to the speaker's phonetic characteristics. In conclusion, our work supplies a first analysis of a model that is sensitive to speaker-dependent characteristics, robust to defective and noisy information, and very simple to implement, and that could be used as an alternative either to more expensive speaker-adaptation systems or to numerous manual correction sessions.

Simple example of the two classes. It shows how the reference transcription (top) and the Sphinx transcription (bottom) for the same (piece of) utterance were compared phoneme by phoneme. Note the recognition error for the phonemes /t/ and /sil/. The other cases belong to the position-error class. The phonemes /i1/ and /u1/ identify the stressed vowels.
Simple alignment example
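
The phoneme-by-phoneme comparison behind the figure can be sketched like this, with illustrative labels and timings (not the study's utterance): units whose labels differ between the two transcriptions fall into the recognition-error class, while units with matching labels but shifted boundaries fall into the position-error class.

```python
# Hedged sketch of the two error classes, on made-up data: a recognition error
# is a label mismatch between the reference and recognizer transcriptions; a
# position error is a matching label whose boundary time is shifted.
reference  = [("sil", 0.00), ("t", 0.05), ("i1", 0.14), ("k", 0.26)]
recognized = [("sil", 0.00), ("sil", 0.06), ("i1", 0.12), ("k", 0.27)]

recognition_errors, position_errors = [], []
for (ref_ph, ref_t), (rec_ph, rec_t) in zip(reference, recognized):
    if ref_ph != rec_ph:
        recognition_errors.append((ref_ph, rec_ph))     # label mismatch
    elif rec_t != ref_t:
        position_errors.append((ref_ph, round(rec_t - ref_t, 3)))  # shifted

print(recognition_errors)  # [('t', 'sil')]
print(position_errors)     # [('i1', -0.02), ('k', 0.01)]
```

Only the position errors feed the correction model; label mismatches are handled in the preprocessing step that reconciles the two alignments.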

The results of this study showed that the context-dependent correction of unit boundaries has a positive influence on the forced alignment, specifically when the misinterpretation of the phone is driven by acoustic properties linked to the speaker's phonetic characteristics. The regression model proposed in this study attempts to maximize the performance, evaluated with the Root Mean Square Error (RMSE) and the correlation between the desired signal (which models the trend of the systematic error for each context-dependent phoneme) and the predicted signal. We obtained the best results with an RMSE of 0.040 and a correlation of 0.444. We tested the model on around 3,000 phonetic units and obtained a reduction of around 12% of the total error with respect to the reference alignment, with positive performance in the single context-dependent classes as well. However, the results showed that the performance of our system (in terms of percentage of error correction) is strongly limited to those phonemes for which the ASR computes an accurate segmentation: when the errors are shorter than one hundredth of a second and arbitrarily distributed, they cannot easily be modeled by our approach.
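
The two evaluation metrics can be computed as below. The signal values are purely illustrative, not the study's data; the point is how RMSE and Pearson correlation relate the desired error signal to the model's prediction.

```python
import math
from statistics import mean

# Illustrative desired vs. predicted systematic-error signals (seconds);
# these numbers are made up for the sketch, not taken from the study.
desired   = [0.010, -0.020, 0.015, -0.005, 0.030]
predicted = [0.008, -0.015, 0.010, -0.002, 0.025]

# Root Mean Square Error between the two signals.
rmse = math.sqrt(mean((d - p) ** 2 for d, p in zip(desired, predicted)))

# Pearson correlation: covariance normalized by the two standard deviations.
md, mp = mean(desired), mean(predicted)
cov = sum((d - md) * (p - mp) for d, p in zip(desired, predicted))
corr = cov / math.sqrt(
    sum((d - md) ** 2 for d in desired) * sum((p - mp) ** 2 for p in predicted)
)

print(round(rmse, 4), round(corr, 3))
```

A low RMSE with a high correlation, as in this toy example, means the prediction tracks both the magnitude and the trend of the systematic error.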

#speechsynthesis #corpus #loquendo #segmentation #natural #intelligibility #tts #asr #AI #ML #statistics #regression #tree #hmm #hiddenmarkovmodel
