Introduction:
Ever since I started trying to make music with vocal synths, I've wished there were a tutorial that covered everything--the entire song-making process from start to finish. With a little more experience behind me than I had before, I'm going to make a partial attempt at that here.
For this purpose, I'm going to describe everything I did to produce my cover of Harry Dacre's "Daisy Bell."
Since this is a cover, this description unfortunately won't touch on music theory or the creative process. But it may be helpful for people new to the hobby who want to know how to glue the different pieces of software together and how the actual production process works, at least from the perspective of someone somewhat less of a newbie than they are.
Finding Sheet Music and Checking Copyright:
After deciding to cover "Daisy Bell," the first thing I did was look online for its sheet music. I found some on MuseScore, but there are other places to look online, too. For instance, @parallax_fifths suggested the Petrucci Music Library, which offers a lot of public domain music.
That public domain bit was important to me, because covering songs can get into difficult legal considerations. Working with songs that are in the public domain helps avoid those problems. Fortunately, "Daisy Bell" is old enough that it's in the public domain, at least in the United States.
Making the MIDI:
With the sheet music in hand, I was able to transcribe it into a MIDI. I went very simple for this--just set the time signature in my DAW (Studio One), created instrument tracks that used PreSonus's Studio Grand piano (which seems to come with most/all copies of Studio One), and started copying in the notes from the score.
I made a track for each staff in the original score--a treble staff for the vocal melody, and a pair of treble/bass staves for the instrumental.
"Daisy Bell" is very convenient in that it consistently uses the same notes for the verses and choruses, respectively. So once a verse is laid out, you can copy it and use it over again for each verse. You can do the same thing for the choruses, too.
Tip 1:
I use Studio One's Duplicate Shared (Shift+D) feature for this work. To use it, click the region with the notes you want to duplicate and press Shift+D. The whole region will be duplicated, and any changes you make to either region will also occur in the other. The caveat to this approach is that if you do anything like humanization (a DAW function that slightly randomizes the note start times and velocities so they sound more like the notes were performed by a human), those changes will be carried over, too. So if you compared two verses side by side, for example, they'd sound identical, which wouldn't be very human. But once you're ready to humanize, you can always unlink the copied regions and humanize them separately.
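For anyone curious what humanization amounts to under the hood, it's essentially small random offsets applied to each note. Here's a rough sketch--the note format is hypothetical, not Studio One's actual data model:

```python
import random

def humanize(notes, max_shift_ticks=10, max_vel_change=8):
    """Slightly randomize note start times and velocities, imitating a
    DAW's humanize function. Each note is a dict with 'start' (in ticks)
    and 'velocity' (1-127) -- a made-up format for illustration."""
    for note in notes:
        note['start'] = max(0, note['start']
                            + random.randint(-max_shift_ticks, max_shift_ticks))
        note['velocity'] = min(127, max(1, note['velocity']
                               + random.randint(-max_vel_change, max_vel_change)))
    return notes
```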
Vocals:
To get the vocals started, I created another instrument track that used Piapro Studio, then exported the MIDI from Studio One and imported it into Piapro. As a starting point, I simply typed in the English lyrics from the music score.
Tip 2:
It's worth noting that I made the MIDI in Studio One and imported it into Piapro. I had a bad experience in the past making the MIDI in Piapro and moving it into the DAW--for some reason, it made it sound like the attack of each note was really high, although the velocity was at the default. Not knowing where the issue was coming from, I couldn't really remove it. So now, I make my MIDIs in Studio One instead.
Tip 3:
After exporting the MIDI from Studio One, I could've deleted the track representing the vocal staff. However, I just muted it. I find it really helpful to be able to see the vocal melody while working with the instrumentals, and you can't see the notes inside Piapro while in Studio One. It's a bit of a pain to keep the notes in Piapro/Studio One in sync if you change the vocal melody at some point, but I find the inconvenience worth it.
Phoneme Replacement:
For good or ill, I tend to work on everything a little at a time rather than finishing each area before moving on, so I did a first round of phoneme replacement at this point, although I think it's more traditionally handled later. (Personally, I like feeling a little more comfortable with how the vocals sound--that way there's less distraction while working on the instrumentals.) Phoneme replacement is a technique where you manually replace the default phonemes chosen by your vocal synth with others that you think sound better. Working with Miku V4 English for the cover, I focused mostly on her vowels, especially her "i." I don't mind Miku's accent, but I tried to make her English for this cover as natural as I reasonably could, and her "i"s are one of the parts of her English pronounced most distinctly differently from how a native speaker would say them. There's lots of good information about phoneme replacement on the Vocaloid Wiki English Phonetics page, along with a really helpful chart of which vowel phonemes are similar to each other.
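As a concrete (if simplified) picture of what phoneme replacement means, imagine a substitution table like the one below. The symbols follow Vocaloid's X-SAMPA-style English phoneme notation, but the code and the specific swap are just my illustration--nothing Piapro actually exposes:

```python
# Illustrative substitution only -- which swaps actually sound better
# depends on the voicebank, the word, and your own ear.
replacements = {
    "i:": "I",  # tense "ee" (as in "seat") -> lax "i" (as in "sit")
}

def replace_phonemes(phonemes, table):
    """Swap out any phoneme that has an entry in the table."""
    return [table.get(p, p) for p in phonemes]

# "Daisy": d eI z i: -> final vowel relaxed to I
print(replace_phonemes(["d", "eI", "z", "i:"], replacements))
```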
I would continue with a few more rounds of phoneme replacement later as I worked, gradually identifying pronunciation items I wanted to adjust and addressing them.
Instrumentals:
The instrumental sounded quite good with the Studio Grand, and it looked to me like the score I found was written with just a single piano in mind. However, I wanted to try out some other instruments. I decided to switch the treble staff over to a harpsichord, since I think its sound is linked in many people's minds with this sort of music from the 1700s and 1800s. Also, it has a higher-pitched sound, so it suits the treble part. Similarly, I switched the bass staff over to another grand piano library to see how it sounded.
At this point, I tried to do a few things related to the instrumental that I'd skipped earlier, too. Namely, I added some crescendos/decrescendos from the score in the form of gradual velocity increases/decreases, as well as some accents in the form of raised velocities on single notes. There were also some notes about gradually decreasing tempo, which I tried putting into Studio One's tempo track. However, once I added effects onto the Piapro track in Studio One, there seemed to be some kind of latency in Piapro getting the tempo changes synced to it, which made Miku skip words after the tempo changes. Since I couldn't really hear the difference with/without the tempo changes, and because I didn't want to remove the effects, I just removed the tempo changes.
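For reference, a crescendo done with velocities is just a ramp across the notes of the passage. Something like this sketch (hypothetical note dicts again, not a real DAW API):

```python
def apply_crescendo(notes, start_vel=50, end_vel=100):
    """Ramp velocities linearly across time-ordered notes to imitate a
    written crescendo; swap start/end for a decrescendo."""
    if len(notes) < 2:
        if notes:
            notes[0]['velocity'] = start_vel
        return notes
    step = (end_vel - start_vel) / (len(notes) - 1)
    for i, note in enumerate(notes):
        note['velocity'] = round(start_vel + i * step)
    return notes
```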
I could have applied humanization to the instrumental (once I was sure the MIDI was otherwise the way I wanted it--humanizing before other adjustments were finished would've made changing the notes harder and more confusing). However, when I tried it, the harpsichord sounded weird--thinner and buzzier--and (understandably) the chords sounded weaker, since their notes weren't all being struck at the same time. So I left the instrumentals unhumanized, since they sounded better that way.
Effects/Mixing:
Originally, I thought you could get any sound/tone by changing a synth's built-in parameters. But now I think that the parameters are really used for fine-tuning. For the largest changes (like getting a synth's voice to go lower/higher than its optimal range, or maybe to apply other distortion more easily), it works better to use some kind of plugin/effect (something like Nectar/Neutron) and/or to use EQ. Once the effects have made the biggest changes in a broad way that doesn't make the voice incomprehensible, you then use the parameters to make little tweaks (on a per-note basis, if needed) to get things just right.
I like iZotope's Nectar for vocal work, since it comes with lots of effects, and its Vocal Assistant can use machine learning to apply some of them for you and give you a good-sounding starting point. In versions above Elements, you can then go into the various effects it applies and change the settings to suit your preference, as well as add/remove effects or change their order. I used Vocal Assistant's Vintage preset on Miku, since it promised to enhance her mid-range frequencies and give her a warmer tone. I also put on a Nectar effect not suggested by Vocal Assistant: saturation. Saturation (like the tape saturation often discussed online) relies on the idea that a "sound" (like a voice or instrument) is really a cluster of pure tones (a.k.a. single-frequency sounds) heard together. Saturation fills in some of the space between these component tones by adding new tones at whole-number multiples of their frequencies--in other words, harmonics. By filling in those spaces, saturation makes the original sound richer.
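You can actually watch this happen with a few lines of numpy. A tanh waveshaper is one simple saturation model (I don't know what Nectar uses internally); feed it a pure 220 Hz tone and extra spectral peaks appear at odd multiples of 220 Hz:

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate                  # one second of audio
pure = np.sin(2 * np.pi * 220 * t)          # a single pure tone
saturated = np.tanh(3.0 * pure)             # soft-clipping saturation

for name, signal in [("pure", pure), ("saturated", saturated)]:
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
    peaks = freqs[spectrum > 0.05 * spectrum.max()]
    print(name, peaks)   # pure: [220]; saturated: [220, 660, 1100, ...]
```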
Tip 4:
To learn more about mixing and some of the science behind sound, I emphatically recommend "The Art of Mixing." It explains what mixing is and at least one philosophy underlying it, what all the different kinds of mixing tools (like EQ, high/low-pass filters, reverb) do and basically how to use them, and some genre-specific mixing conventions. I really think everyone who is trying to make digital music ought to watch it, since it helps you better understand the theory behind what you're doing when using effects and things.
Similar to the vocals, I applied iZotope's Neutron to the harpsichord and grand piano and used a Vintage preset.
I made some broad adjustments to the instruments'/vocals' volume with Studio One's volume sliders, but to handle the rest, I put iZotope's Visual Mixer on the master bus. Visual Mixer communicates with the other iZotope plugins and controls their volumes remotely as you drag the items higher (louder) or lower (quieter). You can also apply panning by dragging the items left/right (although I didn't feel it would be beneficial with so few instruments), as well as fattening by making the items' icons in Visual Mixer wider. (Fattening basically pans an item to the left and right simultaneously so it sounds like it's filling the auditory space more. I did apply fattening to everything.)
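For the curious, here's roughly what pan and "fatten" controls do to a mono signal. This is a generic sketch of two common techniques, since I don't know Visual Mixer's actual algorithms:

```python
import numpy as np

def pan(mono, position):
    """Constant-power panning: position runs from -1 (hard left)
    to +1 (hard right). Returns (left, right) channels."""
    angle = (position + 1) * np.pi / 4      # 0 .. pi/2
    return mono * np.cos(angle), mono * np.sin(angle)

def fatten(mono, delay_samples=400):
    """One common widening trick: dry signal on the left, a slightly
    delayed copy on the right, so the sound seems to fill more of
    the stereo field."""
    delayed = np.concatenate([np.zeros(delay_samples),
                              mono[:-delay_samples]])
    return mono, delayed
```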
Tuning/Special Effects:
I generally like how Miku V4 English sounds by default, but I've wanted to grow into tuning for a while, so this was my first tuning experience. I basically read through the parameter descriptions on the Vocaloid Wiki and considered how each might apply to the song in various places. It helped that I was quite familiar with the song by this point, so I could imagine how those qualities might change if I were singing it myself. It would get tedious to describe everything end to end, but briefly, I did things like:
- Reduce Brightness to soften some "i" pronunciations where phoneme replacement hadn't worked.
- Adjust Dynamics to correlate with pitch. For a high-pitched singer like Miku singing a simple, traditional song like this, I felt the level of "power" she exerted would rise on higher pitches and fall on lower ones, so I made her Dynamics generally follow her pitch changes. Sometimes I increased the Volume as well, when I wanted even more power, or when I wanted an entire section to be louder while preserving the available range within which Dynamics could be adjusted. (There's a rough sketch of this pitch-to-Dynamics idea just after this list.)
- For a particular section where I wanted Miku to sound a little uncomfortable, I increased Pitch Bend Sensitivity to 1 semitone and drew some pitch bends to make her voice fluctuate. I also gradually decreased her Gender factor, which shifts her formants so her voice sounds higher and thinner (it changes the character of the voice rather than the actual pitch). Finally, I gave her a fast Vibrato on one note for some finer quavering.
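Here's the pitch-to-Dynamics rule of thumb from the second bullet as a sketch. The note dicts are hypothetical; in practice I drew the DYN curve by hand in Piapro:

```python
def dynamics_from_pitch(notes, low=64, high=96):
    """Map each note's MIDI pitch onto a Dynamics value so that
    'power' rises with pitch and falls with it."""
    pitches = [n['pitch'] for n in notes]
    lo, hi = min(pitches), max(pitches)
    span = max(1, hi - lo)
    for n in notes:
        n['dynamics'] = round(low + (n['pitch'] - lo) / span * (high - low))
    return notes
```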
To be sure everything sounded consistent, I created a bus for Piapro and the audio track to share and moved Miku's Nectar plugin over to it. That way, Nectar would apply its effects to both.
Mastering:
With all of this work, the song itself could be considered "done." The final step is to make small adjustments so that the song fits the acoustic norms of its genre, if desired, and so that it will sound good on the medium you're publishing to. This is the job of mastering.
Tip 5:
As implied above, each platform you might publish to has different requirements for maximum volume, bit rate, etc. Be sure to check what those requirements are before you master.
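For example, many streaming services normalize loudness (figures around -14 LUFS get cited a lot, but check each platform's current documentation). If you want to measure your own render, the Python libraries soundfile and pyloudnorm can do it--assuming you have a rendered WAV; the filename here is made up:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("daisy_bell_master.wav")  # hypothetical filename
meter = pyln.Meter(rate)                       # ITU-R BS.1770 meter
print(f"Integrated loudness: {meter.integrated_loudness(data):.1f} LUFS")
```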
Tip 6:
Also, be sure to do some testing well in advance of any deadlines you may have. Trying to publish your song contest entry to SoundCloud shortly before it's due and discovering there's some incompatibility problem you have to figure out/fix is not fun. What you hear when you push play in your DAW is not guaranteed to be what you get when you render--that's why mastering exists. Testing in advance is a very good thing :).
There's an excellent tutorial about mastering with iZotope's Ozone plugin, presented by an audio engineer--I learned basically everything I know about mastering there :). After adding Ozone as the last effect on the master bus and running Ozone's Master Assistant, I more or less followed his recommendations.
I didn't have a reference track to make "Daisy Bell" fall in line with, so mastering for me was mostly about avoiding clipping from the volume going too high. I still had to watch Studio One's red clipping indicator, but Ozone's gentle, intelligent limiting was really helpful in taming the peaks without wrecking the sound.
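Conceptually, a limiter keeps peaks below a ceiling while leaving quieter material mostly alone. Ozone's limiter is far more sophisticated (lookahead, attack/release, and so on), but a crude one-liner shows the idea:

```python
import numpy as np

def soft_limit(samples, ceiling=0.98):
    """Squash samples smoothly so they never exceed +/-ceiling;
    quiet material passes through nearly unchanged."""
    return ceiling * np.tanh(samples / ceiling)
```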
After all that work, I finally uploaded the finished cover to SoundCloud. You can hear the finished song below!
(Embedded link: Mobius017's Covers -- vocaverse.network)