I was wondering what approaches people have taken to humanization with vocal synths--i.e., varying the timing of MIDI notes so they aren't always right on the grid.
Various DAWs have a "humanize" function that you can use on MIDI notes within the DAW, but for something like Piapro or other VSTs, I assume that wouldn't work, since the note data lives inside the plugin rather than in the DAW. I haven't found a "humanize" or similarly-named function in the Piapro or Vocaloid 5 manuals, either...though it could be called something I didn't think of.
Is the only approach to edit individual notes manually, either so they're not quite on the grid or so that their pronunciation occurs earlier/later?
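For what it's worth, the basic idea of a "humanize" pass is simple enough to sketch outside any DAW. Below is a minimal, hypothetical Python sketch that jitters note onsets by a random number of ticks. The note format (a list of dicts with a `start` field in absolute ticks) is my own assumption for illustration; a real MIDI file stores delta times, so an actual tool would also need to recompute deltas after shifting, and plugin editors like Piapro keep their note data in their own project format rather than plain MIDI.

```python
import random

def humanize(notes, max_shift_ticks=10, seed=None):
    """Return a copy of `notes` with each onset nudged slightly off the grid.

    `notes` is a hypothetical list of dicts with a 'start' field in absolute
    ticks (this format is an assumption, not any plugin's real data model).
    """
    rng = random.Random(seed)
    shifted = []
    for note in notes:
        # Pick a small random offset, positive or negative.
        offset = rng.randint(-max_shift_ticks, max_shift_ticks)
        # Clamp so a note never starts before tick 0.
        shifted.append({**note, "start": max(0, note["start"] + offset)})
    return shifted

notes = [{"start": 0, "pitch": 60}, {"start": 480, "pitch": 62}]
humanized = humanize(notes, max_shift_ticks=10, seed=1)
```

Whether that kind of offline preprocessing is practical obviously depends on whether the synth will import the result, which is part of what I'm asking.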