lynnquote

  • Why do people expect a Vocaloid/UTAU voice to sound the same once it's been remade into AI?
    It's already been made clear that the voice providers sound different from the old synthesizers.
    Why hold that expectation? Vflower's voice is clearly machine-sounding, and all of its traits come from the compressed nature of the samples plus the Vocaloid3/4 engine warping them.
    Of course the VP can't sound the same. I see people remark that UTAU Ritsu sounds better than the AI Ritsus, when it's clear that the UTAU VB samples were cherry-picked to sound consistently shout-y to a non-human degree. Nobody sings like that. Canon doesn't scream when she sings; it would strain her voice too much. Of course the UTAU will always sound powerful when the samples were recorded and picked to sound that way.
    I think it's unreasonable for people to assume the AI is simply going to be an "automatic tuning generator" when the AI is actually a recreation of the voice provider's singing.
    Training an AI voice directly on the UTAU's rendered output is going to make it sound low-quality because of the double synthesis.
    I get that we all have nostalgia for the old chunky sound of a previous-generation synth, but AI is more welcoming to producers who don't want to deal with all the manual tuning. It's simply easier to use and sounds better by default.
    And we can still keep our copies of the old previous-gen synth.
    lynnquote
    AddictiveCUL, are you seriously trying to imply that people who use SynthesizerV are not artists? What? You do realize tons of early Vocaloid producers didn't even tune their vocals and just left the raw render on top of the instrumental, right? What's so different about this? All I'm trying to say is that AI creates a quicker result with less effort, and if you still want to put more effort in, you can do that. You can achieve style with SynthV voices, too. I'm genuinely confused.
    If the AI version "shouldn't" sound realistic, then what is the point of upgrading the synthesizer? Why even develop a new renderer? What do you expect AI to be? This isn't genAI we're talking about; it's SynthesizerV and its equivalents. They're still as much of an instrument as Vocaloid is.
    You just seem to be very against SynthV for no reason whatsoever. Is this because of brand loyalty or nostalgia?
    In the end, we still have to make the background track ourselves, so why force the musician to add more lengthy processes to their work? Isn't the fact they made a whole song already artistic enough? This isn't about cover tuners, you know.
    Chuchu
    People got used to how the characters sound after so many years, and the way the engine noise affects the voice becomes part of their brand. Crypton got so much shit from the Japanese side about how different Miku Append sounded from default Miku V2, and many Vocalo-P even boycotted Miku Append for a short while until more songs showed the voicebank's potential. Most people just want more ease of use while retaining the unique sound the vocal character is associated with. It's nothing new that people will throw a hissy fit if an established and beloved character's image and voice are drastically changed into something unrecognizable.
    IO+
    I've made peace with it. I don't like to talk about this subject too much because people are different, and so are their opinions.

    I have a UTAU cover and a SynthV cover of the same song; I gave them the same attention to detail, and both sound good.

    But AI is not just one thing; the approaches are different. Most homemade UTAU-to-AI conversions just use NNSVS or RVC and slap on a performance ripped from someone else without permission. SynthV has its own method, Yamaha has its own AI, and NEUTRINO is a little different, but in the end it sounds just as good as a paid editor like SynthV.

    At the end of the day, around 80-90% of listeners only care about results. If the song is good, it's good.
    If you look at Vocaloid in 2005-2010, most songs used un-tuned Vocaloid, and some of them are among the most iconic songs.

    Peace!
    I went on NicoNico and searched for "MEIKO_ENGLISH"; there's surprisingly a lot of content. It's all old videos, but it's far more than what you can find on YouTube.
    I didn't have much luck searching for "Luka English", though.
    I want to discover more work featuring classic English voice DBs. I don't mind covers, I just want to listen to the unique machine noise of a pre-AI Vocaloid.
    I must say though, AI voicebanks handle bass sound and lower notes much better.
    3 days until Shiki Rowen's VOCALOID6 release. It's almost like counting down to doomsday.
    Furry UTAUs be like: male blue dog, male blue dog 2, male wolf (visually identical), male GRAY dog, male cat. All of them are baritones except for Male Blue Dog 2, which has an A4 pitch. They will be forced to sing Miku contralto songs.
    I wish YouTube's search function had some sort of "videos you haven't seen" option. By default, searching for videos prioritizes the popular ones that everyone has already seen at this point (and a bunch of unrelated stuff, too!), but what if I want to find new songs or covers with a specific vocalsynth? I guess I can set the search to sort by most recently uploaded, though.
    I wonder what caused my sudden return to a vocalsynth phase. I guess it never really went away!
    But finding and using these synths on a Linux machine seems difficult. OpenUTAU has a Linux version, but I had to figure out on my own how to even start it up...
    It works well, though, so it's fine. I made Kasane Teto sing VOC@LOID IN LOVE.
    I should learn how to make music sometime.
    (First post, hope I'm doing this right!)