@Luxie: I wanted to elaborate a bit on my answer from earlier (which might've sounded a bit terse). It'll get back into some sound theory, though.
To begin with, if you pluck a piano string, it makes a sound because it vibrates down its entire length. The A4 piano string vibrates (for the most part) at a frequency of 440 Hz, which is regarded as basically the identity of that note from a frequency perspective.
The thing is, though, if I reach in with my other hand, pinch that string in the middle, and pluck one of the halves (so only 1/2 of the string is vibrating), I'll get a higher sound. Specifically, a string's frequency is inversely proportional to its vibrating length, so with only half the length free to vibrate, the string vibrates twice as fast. This produces the note A5, which vibrates at a frequency of 880 Hz (double A4's).
Now think back to A4. When I plucked the open string, I didn't make it vibrate in just one way. We say that it produces the note A4/440 Hz, but really the string vibrates in several modes at once: along its whole length (440 Hz), in two halves (880 Hz), in three thirds (1320 Hz), and so on. So it's actually producing A5 as well, just more quietly. In this scenario, we say that A5 is a harmonic of A4; A4 is the dominant (fundamental) note being produced, but A5 (whose frequency is an integer multiple of A4's) is present also.
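If it helps to see the arithmetic, here's the harmonic series as a couple of lines of Python (just a sketch of the math, nothing instrument-specific):

```python
fundamental = 440.0  # A4 in Hz

# Each harmonic is an integer multiple of the fundamental frequency.
harmonics = [fundamental * n for n in range(1, 6)]
# 440 Hz = A4, 880 Hz = A5, 1320 Hz ~ E6, 1760 Hz = A6, 2200 Hz ~ C#7
print(harmonics)  # [440.0, 880.0, 1320.0, 1760.0, 2200.0]
```

Notice the harmonics bunch up as you go higher: the first jump is a whole octave (A4 to A5), the next is only a fifth (A5 to E6).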
So far, we've only considered the piano string. But the string isn't isolated: its vibration spreads through the entire piano body and sets it vibrating somewhat too, and those vibrations contribute their own sounds.
The point I'm making is that every sound is actually not just a pure tone (A4/440 Hz) but a cloud of sounds. In an instrument, which has been engineered to respect the tonal system, these sounds cluster predominantly around the frequencies of that system's notes and their harmonics, but extraneous sounds/frequencies get introduced too. A vocal synth is the same--the samples come from a human vocalist singing on-key, so you get primarily sounds that correspond to notes, those notes' harmonics, and the extra nuance noises introduced by the vibrations of the rest of that person's body.
(Digital synthesizers like Vital or Serum can be an exception to this, because they can produce pure tones (e.g., exactly 440 Hz and nothing else). Digital musicians then often go through a bit of work to make those sounds less sterilely perfect. But that isn't really relevant to our discussion here.)
What an equalizer does is allow you to manipulate the volume of different sounds within the cloud of sounds that compose every sound. For example, let's say I ask Miku to sing A4. If I put Nectar on it and put on the equalizer module, I can see a graph that shows the cloud of sounds she makes. Across the bottom are the frequencies, and going up the left is the amount of volume (a.k.a. gain). So what I'll see is a thick cluster of loud (high-gain) sounds around A4, with a more sparse collection of sounds of lower gain trailing off on either side. Having seen Miku's voice in EQ many times, I can tell you there will be more sounds going toward the higher frequency than toward the lower one, because her voice is high-pitched.
Let's say that I wanted Miku to sing A4, but I wanted her voice to come out slightly deeper than it does naturally. In the EQ, I could click on the equalizer graph to add a node somewhere among the deeper frequencies (i.e., the lower end of her sound's frequency cloud). This node becomes the peak of a bell curve of gain adjustment. By dragging the node up or down, I can adjust the gain of the frequencies that fall under that curve. By making those lower frequencies louder or quieter, I can make her note come out deeper or brighter without changing the fundamental pitch of the note--it's still A4; some of the frequencies within it are just louder or quieter. In this case, I would drag the node up, which would increase the gain of the deeper frequencies and make her A4 sound deeper.
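To be clear, this isn't Nectar's actual math--just a toy Python sketch of what dragging an EQ node up does (the function name and the width parameter are made up for illustration):

```python
import numpy as np

def bell_gain_db(freqs_hz, center_hz, gain_db, width_octaves=1.0):
    # A Gaussian "bell" of gain in dB, centered on center_hz.
    # width_octaves plays the role of an EQ node's width/Q control.
    octaves_from_center = np.log2(np.asarray(freqs_hz) / center_hz)
    return gain_db * np.exp(-(octaves_from_center / width_octaves) ** 2)

# Boost the low end of an A4 "cloud" by up to +6 dB, centered an octave down:
freqs = np.array([110.0, 220.0, 440.0, 880.0, 1760.0])
boost = bell_gain_db(freqs, center_hz=220.0, gain_db=6.0)
# The frequencies near 220 Hz get the full +6 dB; the boost tapers off
# smoothly on either side, so 1760 Hz is barely touched.
```

The bell shape is the whole point: you're nudging a neighborhood of frequencies, not surgically cutting one.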
A compression plugin is also about adjusting gain. iZotope's compression plugin, similar to the equalizer, will show you a chart of the sound--in this case, time is across the bottom, and gain is once again on the left. A compressor is really about making a sound quieter (i.e., reducing gain). It lets you dictate that sounds that go above a certain level of gain should automatically have that gain reduced according to a ratio (e.g., with a 2:1 ratio, for every 2 dB the source goes over the threshold, only 1 dB comes out). So, in iZotope, when you're looking at the graph, what you can do is drag down a line that represents the threshold gain level. Do this until the sounds you don't want to affect are below it, but sounds loud enough that you want to affect them are above it. Then, set the attack/release times, i.e., how quickly you want the compressor to respond once a sound's gain goes over the threshold, and how quickly you want it to stop reducing the gain once the sound has fallen back below it. Shorter attack/release times make a harsher sound. Then set the knee: the knee controls how abruptly the compression kicks in as a sound crosses the threshold. A hard knee switches the full ratio on all at once; a soft knee eases it in gradually around the threshold, which sounds gentler. Even so, you're best off playing with it to see what sounds good to you.
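If you're curious what attack/release are doing under the hood, here's a toy sketch (a simple one-pole smoother, not any plugin's real code; the coefficient values are made up): the compressor's applied gain chases its target at different speeds depending on direction.

```python
def smooth(targets, attack_coeff=0.5, release_coeff=0.1):
    # One-pole smoother: the applied gain chases the target gain reduction.
    # A bigger coefficient = a shorter (faster, harsher) time.
    gain, out = 1.0, []
    for target in targets:
        coeff = attack_coeff if target < gain else release_coeff
        gain += coeff * (target - gain)
        out.append(round(gain, 3))
    return out

# The compressor wants to clamp a loud moment to half gain, then let go.
# The gain dives toward 0.5 quickly (attack) but drifts back up slowly (release):
print(smooth([1.0, 0.5, 0.5, 0.5, 1.0, 1.0]))
```

Short times mean the gain jumps around fast, which is what produces that harsher, pumpier sound.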
Now, how does a plugin devoted to reducing gain actually let you make a sound louder? This has to do with the maximum level that a medium like a .wav file can represent. If you go over that maximum, you get a phenomenon called clipping. Clipping produces a nasty distorted sound, and you want to avoid it, which is why your DAW will typically have some kind of indicator (usually a red light) on any gain/volume meter that tells you when your sound gets too loud. Getting back to the compressor: once you've reduced the gain of the places where a sound gets louder than you'd like, you can then increase the gain of the entire compressed sound as a whole (this is called makeup gain) without those peaks going over the file's maximum and causing clipping. Additionally, in the case of making your vocal more powerful, compression also brings the quieter parts of your vocal closer in gain to the louder parts, which makes the voice sound stronger as well.
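The threshold/ratio/makeup arithmetic is simple enough to sketch in a few lines of Python (not iZotope's actual code; the function and parameter names are just for illustration, with 0 dB standing in for the file's maximum):

```python
def compress_db(level_db, threshold_db=-12.0, ratio=2.0, makeup_db=0.0):
    # Static compressor curve: every dB over the threshold only
    # contributes 1/ratio dB to the output; makeup gain is added last.
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

# A -2 dB peak with a 2:1 ratio and a -12 dB threshold comes out at -7 dB,
# which frees up headroom below the 0 dB maximum. Now we can raise the
# whole vocal with makeup gain and the peak still won't clip:
print(compress_db(-2.0))                  # -7.0
print(compress_db(-2.0, makeup_db=6.0))   # -1.0 (still under 0 dB)
print(compress_db(-30.0, makeup_db=6.0))  # -24.0 (quiet parts just get louder)
```

That last line is the "more powerful vocal" effect in miniature: the loud peak barely moved, but the quiet moment came up a full 6 dB, so the two ended up closer together.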
This tutorial does a good job of explaining compression, too:
What is compression? How to use compressors in music production - Blog | Splice
Saturation has some relationship to EQ as well, in that it deals with harmonics--it's basically injecting harmonics/noise that correspond to the sounds present in the original signal. So, for example, when Miku sings A4, it's adding in extra energy at that note's harmonics above it (A5, E6, and so on) that was barely there before. This makes those areas of the sound stronger/thicker/richer and also makes the sound more interesting to people's ears; digital musicians use it to help make a digital sound (e.g., from Serum/Vital) less perfect/more rich. Actually using a saturation plugin will vary based on what plugin you're working with, but it's generally pretty simple. In iZotope, you would add the saturation module, add a node to a graph of the sound (to focus the saturation predominantly at a particular frequency), and choose what kind of saturation you want to apply. There are a few kinds, like tape or triode, meant to mimic older analog/physical audio technologies that added this kind of noise naturally; I can't offer much guidance for picking between them except to pick whichever one sounds best to you (I personally like tape). Then drag the node up/down to set how much saturation to apply. Be careful not to add too much (too much saturation will sound crackly), but add enough that it sounds good to you.
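If you want to see the harmonics appear, here's a classic soft-clipping trick in Python (a tanh waveshaper--not what iZotope actually does, just a common textbook illustration): run a pure 440 Hz sine through it and brand-new energy shows up at multiples of 440 Hz.

```python
import numpy as np

sr = 48000                       # sample rate; one second of audio
t = np.arange(sr) / sr
pure = np.sin(2 * np.pi * 440.0 * t)        # a "sterile" pure A4 sine

drive = 3.0
saturated = np.tanh(drive * pure) / np.tanh(drive)   # soft-clipping waveshaper

# With 48000 samples at 48 kHz, FFT bin n corresponds to n Hz exactly.
spectrum = np.abs(np.fft.rfft(saturated))
# tanh is a symmetric curve, so it adds odd harmonics: the saturated signal
# has real energy at 3 x 440 = 1320 Hz, where the pure sine had none.
```

Turning `drive` up is the "too much" knob: past a point the added harmonics dominate and you get that crackly sound.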
And, as I learned in the thread I linked to before, don't mix in isolation; any of these adjustments might change depending on what's going on in your song around your sound (in this case the vocalist) at the time.
There are other tutorials for each topic that will probably explain these topics more clearly/in more detail than I did, but hopefully this overview will be useful.