
Other Cryptonloid voicebank updates, collabs, & concert news (crypton_wat Twitter translations)

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618
Thank you for sharing this information with us. I didn't realize that Magical Mirai had presentation segments; I just assumed people went there for the concert and that maybe there was merch and displays to look at.

I took a closer look at the schedule to figure out what exactly is going on that day (the 31st).
[attached: screenshot of the schedule]


Masataka Goto (the important guy from AIST who developed VocaListener) will be presenting with Wat from 2:30 ~ 4:00.

The things Goto has been working on lately include Songle Sync (used for the birthday message project), TextAlive, and a music service called Kiite.
A note on the page marked with a reference mark (※) says that the Hatsune Miku birthday message using Songle Sync will happen at 3:09 during their presentation.

For Wat, it says he will "announce new plans and the singing/speaking technology currently under development by Crypton Future Media". Because it says "currently under development", I believe this could just be a demonstration to get everyone on the same page about Cherry Pie (I assume a lot of people who don't follow Wat on Twitter don't know about it). Also, "currently under development" suggests it won't be a sale announcement, because it's not finished yet. They could still say when it's planned to be finished, but it definitely won't be for sale on the 31st.

I'm wondering if the segment from 4:30 ~ 5:00 about using iZotope (probably Nectar 3) will be related to Wat's presentation. There's enough of a time gap after it ends (30 minutes) for people to take breaks and then head on over to watch the iZotope demo.

Edit: Maybe because it says the technology is for singing/speaking, Cherry Pie is expected to be used for both? And maybe Exemplar will be happy that Crypton isn't working with Vocaloid anymore? Hopefully we find out whether there will be a V4 Append, a V5, or no Vocaloid at all during this exhibit so we can all sleep at night.

Edit x2: Because we know about Cherry Pie, what if the "big and unexpected announcement" is not tech-related? (As in the Olympics or something?)
 

MagicalMiku

♡Miku♡
Apr 13, 2018
1,319
Sapporo
you're welcome ^^
well yes, during the first Magical Mirai events there weren't such talk sessions, but recently (in the last 3 years, if I'm not mistaken) they've added these session panels to both Magical Mirai and the Snow Miku events. they are nice because you can see the people in charge of many projects in person, which is really different from watching a session talk online on NicoNico ^^

mm.. and yes, "currently under development", so whatever projects they are, maybe they're not ready for an early release (late September), maybe later in the year or early 2020, but that's ok, they can take however much time they need ^u^
 

Wario94

Passionate Fan
Jan 5, 2019
207
29
Even though I don't have enough money to go to Japan just to see Magical Mirai, is it possible there will be a video about that new technology (Vocal Drive, Cherry Pie, and Beef Jerky)?
 

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618
I don't think they'll stream it while it's happening, but I assume they will update everyone via Twitter after the event? Depends on what the news is, I guess. If it's something like "Miku will see you at the Olympics", it would be quick/easy to let people know (unless they have a secret promo video). If it's something like new software images, they will probably share those on Twitter or one of their Miku-related official blogs. Hopefully fans who are there will tell everyone else as soon as they find out, so we don't have to wait until the next day or so for an official statement.
 

MagicalMiku

♡Miku♡
Apr 13, 2018
1,319
Sapporo
Even though I don't have enough money to go to Japan just to see Magical Mirai, is it possible there will be a video about that new technology (Vocal Drive, Cherry Pie, and Beef Jerky)?
it's not possible to take pictures or videos of the talk session panels without permission, but like uncreepy said, and like it happened at the previous Magical Mirai and Snow Miku events, any important announcement will be shared later on the blog or on Twitter ^u^
last year, for example, fans reported on Twitter that during the panel with Wataru Sasaki and Makoto Osaki (producer of Project DIVA Arcade), there were hints about DLC for Snow Miku and a future Project DIVA ^^

anyway I'll be attending both panels (the SEGA panel and the wat panel) ^-^
the number of seats is limited, but it's possible to stand and watch/listen anyway :)

this is a session panel from this year's Magical Mirai in Osaka ^u^

 

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618
Because Voidol (click here for my Voidol thread) released today, I feel inspired to recap what we know about Cherry Pie. I realized that I haven't communicated very clearly exactly what type of synthesis Cherry Pie uses, and I think it's coming across as something more original than it technically is. (Well, it solves problems in an original way, but it's not 100% brand new science.)

What is a vocoder?
When I tried out Voidol, I thought that even though I switched character voices, it still sounded a little like me. I remembered that when the Shinkalion anime came out and Miku had a speaking role in it, Wat said Saki Fujita (Miku's voice provider) spoke the lines and Cherry Pie used her recording as the base: her voice was coated in Miku's. Because Wat kept referring to the technique as "voice coating" (technically called a "vocoder"), I believe the input voice remains as some sort of reference point to be coated. When the Cherry Pie demo came out, some people thought it wasn't Miku's voice changing the singers' voices, just their pitch being changed:


If you listen to the timestamped part (the voice converter part), Miku's voice coats the two singers, resulting in perfect-sounding English. (Pay attention to the bottom of the Cherry Pie pop-up: the "NN Voice Converter" (Neural Network Voice Converter) lets you select the character voice, in this case CV01 = Miku.)

If you go earlier in the video, to 0:26, it is true that the original voice can also be edited with things like pitch and gender. However, the "NN Voice Converter" section at the bottom is gone there; it's not being used at all. It seems like Cherry Pie can be used to edit voices however you like, OR you can coat them to become Cryptonloids.

A vocoder analyzes and re-synthesizes the voice signal for things such as voice transformation. It takes the human voice and passes it through a bank of filters, extracting the envelope of each band (I drew it earlier, but if you imagine a sound wave sitting between 2 slices of bread, the envelope is the bread touching the tops/bottoms of the sound wave); those envelopes are then used for re-synthesis.
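
To make that concrete, here is a very rough sketch of the classic channel-vocoder idea in Python (my own toy illustration, definitely not Crypton's actual code): the human voice is split into frequency bands, each band's envelope ("the bread") is measured, and those envelopes shape the same bands of a carrier signal before everything is summed back together.

```python
# Minimal channel-vocoder sketch (my own toy, not Crypton's implementation).
# modulator = the human voice, carrier = the signal that gets "coated".
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def band_and_envelope(x, lo, hi, sr):
    """Band-pass one channel and return the band plus its amplitude envelope."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, x)
    env = np.abs(hilbert(band))  # envelope = magnitude of the analytic signal
    return band, env

def channel_vocoder(modulator, carrier, sr, n_bands=16, f_lo=80.0, f_hi=8000.0):
    n = min(len(modulator), len(carrier))
    modulator = np.asarray(modulator)[:n]
    carrier = np.asarray(carrier)[:n]
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        _, env = band_and_envelope(modulator, lo, hi, sr)     # "the bread"
        carrier_band, _ = band_and_envelope(carrier, lo, hi, sr)
        out += carrier_band * env                             # re-synthesis
    return out / np.max(np.abs(out))                          # normalize
```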

An example of a vocoder that many people are familiar with is auto-tune: you input a voice and it makes it sound robotic. Other examples are robot/alien voices in movies. Here is a good explanation with many quick demonstrations of types of vocoders:


I also believe that they called it an "effector" for a reason. Cherry Pie adds a cool effect to the original voice.

What is HMM-based synthesis?
Cherry Pie uses deep learning (aka deep neural networks / machine learning), where you give the computer lots of examples so it can learn rules for how to use the material and recreate it without making a horrific abomination. (Ex: software that can identify animals, art software that removes cars from a video, art software that applies the painting style of a master painter to your photo.) Wat referred to this as "patterns".
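
If it helps to picture what "learning from lots of examples" means, here is a toy sketch (purely my own illustration, nothing to do with Crypton's actual model): pretend some numbers are features of the input voice and others are the matching features of the target voice, and let gradient descent adjust a tiny model until it can map one onto the other. Real voice-conversion networks do essentially the same thing, just with far more parameters and real acoustic features.

```python
# Toy "learn a mapping from examples" sketch (my own illustration, not Crypton's code).
# Pretend x are acoustic features of the input voice and y are the matching
# features of the target voice; gradient descent nudges W and b until x @ W + b ~= y.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 4))                 # 200 example frames, 4 "features" each
true_W = rng.normal(size=(4, 4))
true_b = rng.normal(size=4)
y = x @ true_W + true_b                       # the "target voice" features to imitate

W, b = np.zeros((4, 4)), np.zeros(4)          # the model starts knowing nothing
for step in range(2000):
    pred = x @ W + b                          # current guess at the target features
    err = pred - y
    W -= 0.01 * (x.T @ err) / len(x)          # follow the gradient of the squared error
    b -= 0.01 * err.mean(axis=0)

print("mean squared error:", float((err ** 2).mean()))  # shrinks toward 0 as it learns
```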

Voidol and the still unreleased new version of CeVIO also use deep learning. Both the current CeVIO and Voidol also use HMM, just like Cherry Pie.

HMM is a common thing to use in speech synthesis. An example that might be easier for people to understand is Amazon Alexa. HMM means "Hidden Markov Model"; in voice recognition, it analyzes the waveform and compares it to a dictionary (aka what the neural network has learned so far) to figure out what's being said. HMM tries to figure out what was said by breaking the speech into small units of data to find the phonemes (the smallest units of speech) and uses statistics to decide which sequence of phonemes is most likely. The computer has rules coded into it and a dictionary of vocabulary. Alexa is a bit different in that it tries to use the phonemes to understand what word you said in order to communicate with you. I don't think the music-related synthesizers need to do this, but I do believe phonemes matter so that the software can properly coat/effect them, or recreate the sounds in order to replace what you said with (for example) Miku's phonemes.
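
For the curious, here is what the "uses statistics to pick the most likely phonemes" step looks like in miniature. This is a toy Viterbi decoder over a hand-made 3-state model (my own illustration; real recognizers use models trained on audio features, not numbers I picked), but the idea is the same: for each observed frame, score every phoneme state and keep the most probable path.

```python
# Toy Viterbi decoding over a 3-"phoneme" HMM (illustrative only; the states,
# probabilities, and observations here are made up for the example).
import numpy as np

states = ["sil", "m", "i"]                    # hypothetical phoneme states
log_start = np.log([0.8, 0.1, 0.1])
log_trans = np.log([[0.6, 0.3, 0.1],          # P(next state | current state)
                    [0.1, 0.5, 0.4],
                    [0.1, 0.2, 0.7]])
log_emit = np.log([[0.7, 0.2, 0.1],           # P(observed frame | state),
                   [0.2, 0.6, 0.2],           # one row per observed frame
                   [0.1, 0.3, 0.6],
                   [0.1, 0.2, 0.7]])

def viterbi(log_start, log_trans, log_emit):
    T, N = log_emit.shape
    dp = np.full((T, N), -np.inf)             # best log-probability ending in each state
    back = np.zeros((T, N), dtype=int)        # which previous state that best path used
    dp[0] = log_start + log_emit[0]
    for t in range(1, T):
        for j in range(N):
            scores = dp[t - 1] + log_trans[:, j]
            back[t, j] = np.argmax(scores)
            dp[t, j] = scores[back[t, j]] + log_emit[t, j]
    path = [int(np.argmax(dp[-1]))]           # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return list(reversed(path))

print([states[i] for i in viterbi(log_start, log_trans, log_emit)])
# -> ['sil', 'm', 'i', 'i'], the most likely phoneme sequence for these frames
```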

Based on how choppy Voidol sounds, and some of the older clips of Miku speaking and singing, it must be choppy because the character's phoneme sound files are trying to match the human voice. Or maybe the examples and rules the neural network has aren't quite enough for something realistic yet.

However, I found some other examples of HMM-based text-to-speech that are like Cherry Pie.

In this video, the woman draws along the lines on a tablet to change the voice (which is narrating "Alice in Wonderland") in real time. If you pay attention to the upper right corner, it shows what is being changed. Sound familiar (almost like the 0:26 part of the Cherry Pie demo)?

Here is another example where no effects are being applied:

This makes me wonder how realistic Cherry Pie can sound (referring to choppiness).

If there are already vocoders (like auto-tune) and HMM-based synthesizers, what can Cherry Pie do differently?
I believe the point of Cherry Pie is to be a one-stop shop for editing the input (human) voice. The effectors (such as Vocal Drive and Cherry Pie) can be used on audio that has already been recorded (like the man and woman singing in the Cherry Pie demo) to change things like pitch, growliness, and robot-ness. You can also change the singing to become a new person completely (i.e., a Cryptonloid) with the vocoder (voice coating) options.
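
As a concrete (and very hand-wavy) picture of the "edit the voice in cool ways" side, here is a toy effect chain in Python. It is not any of Crypton's effectors, just two classic cheap tricks: soft clipping for a growly, driven sound and ring modulation for a robot-ish sound, applied one after the other to a vocal signal.

```python
# Toy effect chain sketch (my own illustration, not Crypton's effectors):
# soft clipping for a "growly" drive, then ring modulation for a "robot" flavor.
import numpy as np

def growl(x, drive=5.0):
    """Soft-clip the waveform; harder drive = more distortion/growl."""
    return np.tanh(drive * x) / np.tanh(drive)

def robotize(x, sr, freq=60.0):
    """Ring-modulate with a sine wave, a classic cheap robot-voice trick."""
    t = np.arange(len(x)) / sr
    return x * np.sin(2 * np.pi * freq * t)

sr = 44100
t = np.arange(sr) / sr
vocal = 0.5 * np.sin(2 * np.pi * 220 * t)     # stand-in for a recorded vocal
processed = robotize(growl(vocal), sr)        # chain: vocal -> growl -> robot -> output
```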

The singing quality is supposed to be further tweaked with Nectar.

The main goal of Cherry Pie was originally to save time/effort on tuning the Cryptonloids. The VTuber trend started recently, after they started making the "Appends" for the Cryptonloids. During this time, Crypton also got on the hype train for real-time motion capture with 3D models. VTuber popularity exploded, and Crypton realized that lots of people have 3D anime avatars that aren't even the same gender as them, but their voices don't match the characters'. So Cherry Pie is ultimately supposed to be a real-time voice converter that can change your voice while livestreaming, so your anime body matches a new Cryptonloid-based voice. I assume they would like to use this for concerts as well, in order to give personalized messages or to speak foreign languages more convincingly (by coating a clip of a native speaker with Miku's voice, for example).

To summarize, Cherry Pie is useful for people creating songs that need the vocals edited in cool ways (ex: robot voice), for people wanting to sing and change their voice into a Cryptonloid, and it is also for changing your voice in real time for speaking or singing.
 

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618
I don't really know what Wat's talking about, but it seems like at 1 PM on the 30th at Magical Mirai in Tokyo, during Sega's "stage" (looking at the schedule, I assume he's referring to the 1 PM - 2 PM "SEGA ft. Hatsune Miku Project 10 year stage ~There is also the latest news~" segment), Wat and some producers participated in a talk, and part of it is tied to the "big and unexpected" announcement on the 31st. He says the contents can probably be found online and at Magical Mirai.

The tweet was from last night in my time zone, on the 29th, and there are only two comments on it, from 9 hours ago. It seems weird that it's now 9 PM in Japan, long after the 1 PM Sega presentation, but no one seems to have talked about it. The concert part of Magical Mirai ends at about 9:30 PM, I think, so maybe the people who saw the Sega stage were too busy with the rest of Magical Mirai to comment and haven't even gotten back to their hotels yet?

Unless I'm confused about the timeline, or some of the Sega-related news has already been revealed (like Project Sekai) in the other VocaVerse forum threads.
 

MagicalMiku

♡Miku♡
Apr 13, 2018
1,319
Sapporo
Wataru Sasaki was talking about the Project Sekai announcement, since it's a collaboration project between Crypton and SEGA ^-^
(and my guess is that tomorrow's announcement might be related to the girl on the right of the teaser, maybe they'll use some kind of tech for her voice and animation)

Project DIVA doesn't have the Crypton logo (only the Piapro logo), but Project Sekai has the Crypton logo next to the SEGA logo, which means they are providing some tech
 

MagicalMiku

♡Miku♡
Apr 13, 2018
1,319
Sapporo
because Project Mirai had the feature of changing the voice of some songs depending on which Vocaloid you selected ^^

edit: to be clear, there is a Crypton logo also on the Project DIVA back covers, but this time they've used the logo to show that they're helping with the development
 

Wario94

Passionate Fan
Jan 5, 2019
207
29
because Project Mirai had the feature of changing the voice of some songs depending on which Vocaloid you selected ^^

edit: to be clear, there is a Crypton logo also on the Project DIVA back covers, but this time they've used the logo to show that they're helping with the development
That's good!
 

mobius017

Aspiring ∞ Creator
Apr 8, 2018
1,992
(and my guess is that tomorrow's announcement might be related to the girl on the right of the teaser, maybe they'll use some kind of tech for her voice and animation)
I could be completely off-base here, but does anyone else think that girl looks like IA?
 

Ceres

「会いたかった」
May 13, 2018
68
I could be completely off-base here, but does anyone else think that girl looks like IA?
Her hair lacks waaaaay too much -poof- to look like IA :P (and the braids)
My guess is it's the producer/player character of the game; it would be unexpected if it's someone new who's not "generic"... but it would be interesting to see how it's handled
If anything, she reminds me of Yukina from Bandori, which could be possible since it's Craft Egg, but I doubt it, since they would use Kasumi for something like this, as usual.
 

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618

[About today's technology announcement]

I will start the announcement at 14:30 (2:30 PM). It is planned to end at 16:00 (4 PM).

AIST-san's (Masataka Goto of the National Institute of Advanced Industrial Science and Technology / 産業技術総合研究所) topics include a talk about 3 technologies.

For topic #1, the detailed information will be made public at the end of the year.

There is also a short and clear blog post planned.

Yoroshiku onegaishimasu (please treat me well/work together with me).
So Goto is probably talking about Songle Sync/Miku's birthday message project, TextAlive, and/or the music service called Kiite.

Maybe Labopton BLOG will update? I guess this is just a teaser until the end of the year; maybe they will release the software then?
 

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618
Eji and Amano both made tweet threads about what's happening that include pictures of the slides (click the tweets and scroll through the pictures):



Here is a really messy translation compilation, for now, of the most interesting stuff from the threads. Sorry that it just looks like insane scribbling; I will do a better translation based on the blog post when it comes out. The slides are too blurry and I don't want to spend too much time reading them.

Goto talks about Kiite (made by Crypton and AIST).

Wat showed a slide that says they updated all the Cryptonloids' databases (called voice banks in English) for things like pronunciation balance (consonants and vowels), including Miku Chinese (the update for Chinese is supposed to be released this year). They used Miku as the guinea pig and then updated the other singers to match.

The effectors they revealed are: "VocalDrive, HS Booster, Drifter, Shifter, Cherry Pie, FSM, and more..." (there are supposed to be 11 effector VSTs).

HS Booster = Harmonic Structure Booster (affects things like breathiness, or stretches the voice)
Vocal Drive = Husky or growl voice
Drifter = Change the formants (gender factor)
FSM (Frequency Synchro Multi-Effector) = ?

It took 2 years to make the voice changer (vocoder) with AIST. It's aimed at VTubers.

They won't use V5. You use them in NEXT Generation Piapro Studio (which contains semi-automatic tuning). It said "new singing software series planned for sale in the first half of 2020" (which could be anywhere from January to June; Japan breaks the year into 2 halves).

The new setup is: CVVC > Piapro Studio Editor > Voice Effector > output

People who already bought the current products will probably get a discount.

I will be the first one to offer @Exemplar a pat on the back. I guess Vocaloid is dead, you can throw the first bunch of dirt on top of its coffin.

Edit: Remember that these tweets were written frantically during the presentation by Eji and Amano. They tried to rephrase what they heard Wat say, but please don't take these as official statements from Crypton. Amano said that this isn't necessarily a separation from Yamaha, and that Crypton and Yamaha respect each other and are in talks. Remember that Crypton and their characters got popular through the Vocaloid engine, so don't rag on it too hard.
 

uncreepy

👵Escaped from the retirement home
Apr 9, 2018
1,618

Labopton Blog update / At the new technology presentation at "Magical Mirai 2019", we made an announcement about the new Hatsune Miku database (voicebank(s)) currently under development by our company, and the associated software!
Blog post link

[attached: screenshot of the software from the blog post]
^ The disclaimer at the bottom says they are currently developing it, so the image might not reflect the final product.
It looks similar to the standalone version of Piapro that came with Miku Chinese (the parameters being blueish).

The details will be announced at the end of 2019. And the release is planned for the first half of 2020, so please wait!!
※ For people owning Hatsune Miku V4X and other virtual singers by our company, there will be an announcement at the end of the year for an exclusive sale.
Geez, what a vague blog post. I thought they would recap the presentation in the post. Hopefully they do that sooner than the end of the year. Glad Amano and Eji took photos of the presentation for us.

Edit: Wondering if we can use non-Crypton Vocaloids in the new Piapro or not.
 
