Link to the PDF: https://ssl4.eir-parts.net/doc/4388/tdnet/1751595/00.pdf
From the news section of AI Inc. (株式会社エーアイ), AITalk's developer (September 13th):
(Thanks to Fumito Fumizuki https://twitter.com/fumito_fumizuki for pointing this news out in the Discord.)
I translated the important parts:
AITalk is currently at version 4 (AITalk®4). The temporary name for the new engine is AITalk®5.
Speech synthesis has shifted from mainly powering automated calls and e-learning videos to powering smart products with interactive capabilities, so the company received a grant (July 2017 to December 2018) to develop a deep-learning synthesis engine that addresses the old engine's shortcomings.
AITalk 4 uses "corpus-based speech synthesis". Interactive use cases require voices with emotions such as happiness, sadness, and anger. Corpus-based synthesis needs both a phoneme dictionary and a rhythm (prosody) dictionary to determine the correct accent, but building these is costly, and transitions between emotions are not smooth.
The next-generation AITalk 5 uses deep-neural-network speech synthesis. Sound quality goes up, pronunciation sounds more natural and human, and it switches naturally between emotions such as joy, sadness, and anger. Instead of the per-emotion dictionaries that corpus-based synthesis required, the deep-learning engine needs fewer recordings, making it cheaper to produce.
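To make the contrast concrete, here is a toy Python sketch of the difference described above. This is NOT AITalk's actual implementation: the corpus contents, the "model", and the emotion-weight format are all invented for illustration. The point is only that corpus-based synthesis is locked to whichever per-emotion recordings exist, while a single learned model can accept a continuous emotion mix.

```python
# Toy contrast between corpus-based (unit-selection) and DNN-style synthesis.
# All names and data here are made up for illustration.

def corpus_based_synthesis(text, emotion, corpora):
    """Unit selection: look up recorded snippets in a per-emotion corpus.
    Every emotion needs its own recordings (expensive), and changing emotion
    means switching corpora wholesale (hence abrupt transitions)."""
    corpus = corpora[emotion]
    return [corpus[word] for word in text.split()]

def dnn_synthesis(text, emotion_weights, model):
    """DNN-style: one model maps linguistic input plus a continuous emotion
    vector to output, so emotions can be blended smoothly."""
    return [model(word, emotion_weights) for word in text.split()]

# Toy data: one recorded unit per word per emotion.
corpora = {
    "joy":   {"hello": "hello_joy.wav",   "world": "world_joy.wav"},
    "anger": {"hello": "hello_anger.wav", "world": "world_anger.wav"},
}

# Corpus-based: restricted to the emotions that were recorded.
print(corpus_based_synthesis("hello world", "joy", corpora))
# ['hello_joy.wav', 'world_joy.wav']

# DNN-style: a single stand-in "model" accepts any mix of emotions.
toy_model = lambda word, weights: (word, dict(weights))
print(dnn_synthesis("hello world", {"joy": 0.7, "sadness": 0.3}, toy_model))
```

The second call would fail in the corpus-based version, since no "70% joy / 30% sadness" corpus exists; that gap is exactly what the press release says the DNN engine is meant to close.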
The new engine will be available April 2020.
For those who don't know, products such as GynoidTalk and VOICEROID use AITalk.