I imagine a rap VB would be a lot like any other genre-specialised voicebank: use it within its intended parameters and you get fantastic results, but anything outside of them is... less-than-stellar, to say the least.
I’m hoping the Emotion parameter becomes a staple and is continuously built upon, because the ability to pull from different parts of a dataset to fit the tonal and emotional requirements of a song would be quite incredible. Of course, it would require recording data for 2, 3, or however many emotions they want to support, but I imagine the payoff would be huge. It would probably help massively with the tonal variety of rap, too.