
Fattening vs Stereo Spreading with Stereo Imaging Plugin

mobius017

Aspiring ∞ Creator
Apr 8, 2018
1,982
A quick mixing question here: is there any difference between fattening and using a stereo imaging plugin to increase stereo width?

I assume maybe:
  • There's a difference of degree (fattening being more subtle, and the stereo imaging being more extreme)
  • The imaging plugin could do more artistic things, like...I don't know...moving the sound all the way to one side? (Though that's just panning....)
    • I guess that opens the question: what do people use stereo imaging plugins for, if not for fattening...?
 
  • Like
Reactions: Buck

inactive

Passionate Fan
Jun 27, 2019
179
"Fattening" usually refers to saturation, which is subtle distortion. The difference between "saturation" distortion and "distortion" distortion is purely subjective, but generally once the saturation becomes obvious, it has moved into the world of distortion.

Dan Worrall talks about saturation vs. distortion in the podcast below. (Hopefully the timestamp works. If not, skip ahead to 14:46.)

If you're interested, GSatPlus from TBProAudio is free and supports up to 4x oversampling. (Oversampling, in my opinion, is a must for non-linear plugins, such as saturation, distortion, and compression.)
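To make the oversampling point concrete, here's a minimal Python sketch: tanh soft clipping stands in for a saturation stage (GSatPlus's actual curve isn't public), and the signal is upsampled 4x before the non-linearity and downsampled after, so the harmonics it generates don't fold back as aliasing.

```python
import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x, drive=4.0, factor=4):
    """tanh soft-clip saturation with `factor`x oversampling."""
    up = resample_poly(x, factor, 1)           # band-limited upsample
    up = np.tanh(drive * up) / np.tanh(drive)  # non-linear waveshaping
    return resample_poly(up, 1, factor)        # low-pass and downsample

# Example: a 5 kHz sine at 44.1 kHz. Without oversampling, the harmonics
# tanh() creates above ~22 kHz would alias back into the audible band.
sr = 44100
t = np.arange(sr) / sr
sine = 0.8 * np.sin(2 * np.pi * 5000 * t)
fattened = saturate_oversampled(sine)
```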

Another way to fatten tracks is with layering and/or double-tracking of instruments. Slightly detuning one of those layers can also add thickness.

Stereo imaging is kinda vague. Stereo delay effect? Haas effect? Or, as you've mentioned, panning? Importantly, all three of these are linear processes, and therefore do not need oversampling. But some stereo widener effects can create mono-compatibility issues, especially the Haas effect. How much mono-compatibility matters, though, is purely up to you.
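For the Haas effect specifically, here's a rough sketch of both the widener and the mono-compatibility problem it causes (the 15 ms delay is just an arbitrary Haas-range value):

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=15.0):
    """Return (L, R): the dry signal left, a short-delayed copy right."""
    d = int(sr * delay_ms / 1000)
    return mono, np.concatenate([np.zeros(d), mono[:-d]])

sr = 44100
rng = np.random.default_rng(0)
L, R = haas_widen(rng.standard_normal(sr), sr)  # 1 s of noise as a test

mono_sum = 0.5 * (L + R)  # what a mono phone speaker plays back
# Summing a signal with a 15 ms delayed copy of itself is a comb filter:
# nulls land at odd multiples of 1/(2 * 0.015) ≈ 33 Hz, notching the mix.
print(f"first comb-filter null ≈ {1 / (2 * 0.015):.0f} Hz")
```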
 

IO+

Resonance47
Apr 22, 2021
239
I think the term "fattening" refers to a very small delay between the two channels (possibly with other modulation involved), which creates the illusion that the sound is "fat" (some people say "thick"), even though the channels have the same amplitude. It can even make the sound feel louder.

Say the L channel is offset by 12 milliseconds and the R channel by 20 milliseconds (maybe with a slight detune as well). Delays of a few milliseconds aren't perceived as echoes or flams, but as a change in the apparent position of the source signal. Plugins that do this are called doublers. A very tight delay can make almost anything sound dense.

A traditional delay, or manually adjusting the timing of your source, can get the job done too.
Adding some modulation, like detune, along with a small amount of distortion or saturation, can make a big difference.

I think that's why analog tape echoes like the Echoplex EP-3 are so popular: their sound is wonderfully wonky.
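A rough Python sketch of that doubler idea, using the numbers above: each channel gets its own short delay, and the right channel's copy is slightly detuned (faked here by resampling, which shifts speed along with pitch; a real doubler uses a proper pitch shifter).

```python
import numpy as np
from scipy.signal import resample

def doubler(mono, sr, l_ms=12.0, r_ms=20.0, detune_cents=8.0):
    ratio = 2 ** (detune_cents / 1200)            # ~0.5% pitch offset
    detuned = resample(mono, int(len(mono) / ratio))
    n = min(len(mono), len(detuned))

    def delayed(x, ms):
        d = int(sr * ms / 1000)
        return np.concatenate([np.zeros(d), x[:n - d]])

    left = 0.5 * (mono[:n] + delayed(mono, l_ms))      # dry + 12 ms copy
    right = 0.5 * (mono[:n] + delayed(detuned, r_ms))  # dry + 20 ms detuned copy
    return left, right
```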
____________

Stereo imaging: tl;dr: stereo imaging is the manipulation of a signal within a 180-degree stereo field.
I wish I had time to make a picture.

A stereo image can be broken down into 3 distinct sections:
The center 90-degree image
The left 45-degree image
The right 45-degree image

The majority of your signal will be located within the first section – the center 90-degree image.
A traditional pan pot operates within this 90-degree image: if you were to pan a guitar hard left, it would sit at the far left of this 90-degree stereo image.

Accessing the 2nd and 3rd sections of the 180-degree stereo image is where delay and phase cancellation come into play. By splitting a mono signal into two separate signals and delaying one slightly (or significantly), you cause phase cancellation and, in turn, widen the stereo image.

This phase cancellation results in the widening of the signal into various parts of the full 180-degree stereo image. The degree to which the signal is widened depends on multiple variables, including the frequency and amplitude of the signal, and the amount of delay used.
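For contrast, delay isn't the only way imaging plugins reach those outer sections; many work in mid/side instead, scaling the L−R difference signal. A minimal sketch (width = 1 leaves the signal untouched, 0 collapses it to mono):

```python
import numpy as np

def ms_width(left, right, width=1.5):
    mid = 0.5 * (left + right)    # content shared by both channels
    side = 0.5 * (left - right)   # content that differs between them
    return mid + width * side, mid - width * side

# Unlike delay-based widening, the mono sum of the output here is just
# `mid`, whatever the width setting, so it stays mono-compatible.
```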
 

Buck

Aspiring Fan
Apr 8, 2018
27
The term "fat" isn't really used in the same context as "wide" or "thick" in the sense of pads and layers and stuff.

"Fat" is more of a description of a sound with a lot of compression and distortion to reduce its dynamic range. It particularly applies to the low end, especially in the realms of EDM and Hip Hop production. Saturation works to this end.

This is usually seen as a positive because a sound with lower DR will be perceived as louder than the same sound with higher DR at the same peak level. It's also just kinda neat.

The plugin "Sausage Fattener" by Dada Life is a compressor + distortion designed with this in mind. Its name is derived from the idea of compressing a sound to the point its waveform looks like a sausage. This is obviously a joke, but it exemplifies the spirit of the term "fat"

Stereo width has more to do with the difference between the left and right stereo channels, as the two above me have pointed out. A stereo imaging plugin will do this, but it usually comes at the expense of perceived loudness. Most people would rather layer, double-track, or use features like the unison mode some synthesizers have to get wider sounds. This is also why people prefer to leave the low end more mono than the high frequencies. The center-ness of the bass makes it seem louder (regardless of its peak level), which gets you fatness points.

This is one use of stereo imaging plugins I've seen, particularly in Ozone: in mastering, some people will widen the high end a bit while making the low bass (like, 100 Hz and down) more mono.
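A sketch of that mastering move, assuming a simple Butterworth crossover (the 100 Hz split and 1.3x width are placeholder values; Ozone's Imager works per-band, but these exact filters are just for illustration):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split(x, sr, cutoff=100.0):
    lo = sosfiltfilt(butter(4, cutoff, btype="low", fs=sr, output="sos"), x)
    hi = sosfiltfilt(butter(4, cutoff, btype="high", fs=sr, output="sos"), x)
    return lo, hi

def master_width(left, right, sr, high_width=1.3):
    lo_l, hi_l = split(left, sr)
    lo_r, hi_r = split(right, sr)
    lo_mono = 0.5 * (lo_l + lo_r)          # bass summed to mono
    mid = 0.5 * (hi_l + hi_r)
    side = 0.5 * (hi_l - hi_r)             # only the highs get widened
    return lo_mono + mid + high_width * side, lo_mono + mid - high_width * side
```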

So there is a difference, particularly if you are looking at plugins that claim to get you "fat" sounds (they are probably compressors).

P.S. These production threads are fun.
 

inactive

Passionate Fan
Jun 27, 2019
179
This is also why people prefer to leave the low end more mono than the high frequencies. The center-ness of the bass makes it seem louder (regardless of its peak level), which gets you fatness points.
The reasons for leaving bass frequencies in mono are more nuanced than fatness and/or loudness.
  • Humans can’t determine the stereo location of low-frequency information, so there’s no need to have stereo information below ~80 Hz.
  • The frequency information in music tends to slope down toward the high end, leaving more energy in the low end. If these energetic low frequencies are in mono, it benefits the mono-compatibility of the aggregate signal.
  • Bass frequencies take more time to complete a cycle. As a result, stereo phase-cancellation issues in the low end are more egregious than they are in the mid or high ends. (This relates to the above point.)
  • If you’re going to print onto vinyl, everything from about 200 Hz and below (I’ve even read 300 Hz and below) will be run through an elliptical filter to mono-ize the low end. This is done because if high-energy bass frequencies are in stereo they can supposedly push the needle out the side of the groove, causing the record to skip. The mastering engineer will do this mono-ization (in addition to the necessary RIAA equalization). Having said that, I have no experience with this because I don’t give a flying poop about vinyl.
  • Finally, I’ve read here and there that a mono bass works well with the psychoacoustic modelling of lossy compression, but I have yet to find any reliable sources on this. And to be honest, I haven’t looked.
There are probably other reasons, but the above are enough to convince me! Although I don’t mono-ize so much as I’m just (semi-)careful about what stereo information gets placed in the low end.
 

mobius017

Aspiring ∞ Creator
Apr 8, 2018
1,982
Wow, there's just a ton of useful info in this thread! Thank you!

It looks like this is one of those times where a term is used to mean multiple things. "Fattening" is apparently used both for 1) sort of the density of sound information, to paraphrase what @parallax_fifths and @Buck were describing, and 2) the stereo placement/delay technique @IO+ was discussing that results in a "bigger" sound.

I had encountered the latter of these in "The Art of Mixing," but reading through the thread, I realized that I'd forgotten about the delay component. I think that's why I was confused about how stereo imaging (which I was thinking of mostly in terms of stereo width) was different. Since the stereo/delay trick is basically putting a ~30 ms delay on a mono signal and panning the original and the delayed copy hard left/right, respectively, removing the delay part of the technique leaves you with just stereo placement considerations. :)

If you're interested, GSatPlus from TBProAudio is free and supports up to 4x oversampling. (Oversampling, in my opinion, is a must for non-linear plugins, such as saturation, distortion, and compression.)
Definitely interested! I've had some positive results putting saturation on my vocals, so I'll check this out!

I think the term "fattening" refers to a very small delay between the two channels (possibly with other modulation involved), which creates the illusion that the sound is "fat" (some people say "thick"), even though the channels have the same amplitude. It can even make the sound feel louder.

Say the L channel is offset by 12 milliseconds and the R channel by 20 milliseconds (maybe with a slight detune as well). Delays of a few milliseconds aren't perceived as echoes or flams, but as a change in the apparent position of the source signal. Plugins that do this are called doublers. A very tight delay can make almost anything sound dense.

A traditional delay, or manually adjusting the timing of your source, can get the job done too.
Adding some modulation, like detune, along with a small amount of distortion or saturation, can make a big difference.

I think that's why analog tape echoes like the Echoplex EP-3 are so popular: their sound is wonderfully wonky.
____________

Stereo imaging: tl;dr: stereo imaging is the manipulation of a signal within a 180-degree stereo field.
I wish I had time to make a picture.

A stereo image can be broken down into 3 distinct sections:
The center 90-degree image
The left 45-degree image
The right 45-degree image

The majority of your signal will be located within the first section – the center 90-degree image.
A traditional pan pot operates within this 90-degree image: if you were to pan a guitar hard left, it would sit at the far left of this 90-degree stereo image.

Accessing the 2nd and 3rd sections of the 180-degree stereo image is where delay and phase cancellation come into play. By splitting a mono signal into two separate signals and delaying one slightly (or significantly), you cause phase cancellation and, in turn, widen the stereo image.

This phase cancellation results in the widening of the signal into various parts of the full 180-degree stereo image. The degree to which the signal is widened depends on multiple variables, including the frequency and amplitude of the signal, and the amount of delay used.
I'm a little confused. These two items (below/above the line) seem similar to me. Wouldn't the technique above the line have the same mono phase-cancellation behavior as the technique below the line? So are these two different ways of describing the same thing, or is there a difference between the two that I'm not understanding? Not that there's any issue either way; I'd just like to maximize what I'm learning :) .

Stereo width has more to do with the difference between the left and right stereo channels, as the two above me have pointed out. A stereo imaging plugin will do this, but it usually comes at the expense of perceived loudness. Most people would rather layer, double-track, or use features like the unison mode some synthesizers have to get wider sounds.
That's good to know. Definitely true; you can hear how widening makes the sound airier/less dense and quieter, so considering those other options as alternatives will be really helpful. Then again, if I'm looking for an airier sound, taking the opposite approach is something to keep in mind.... Thank you!

All the discussion about the amount of mono/stereo in low/high-pitched frequencies will be very helpful, too!

Thanks very much to everyone for all of the useful discussion!
 

Nokone Miku

Aspiring Lyricist/Producer
Jul 14, 2021
76
Oh my god, I spent dozens of hours during November working on this same sort of thing!

For human vocals, you can record two nearly identical takes and play one in the left channel and the other in the right. I tried doing this with my synth vocal track by adjusting the tuning on a second track and messing with the Dynamics and Brightness, or shifting the Portamento slightly in various parts. It kinda, sorta worked, but it wasn't giving me the results I wanted.

I've often seen the "Doubler" VST plugin recommended for when you want that effect but only have one take. I haven't gotten ahold of the plugin, so I'm not sure how well it works on synth vocals.

You can shift the phase of one channel. But that only works in stereo. When played back in mono it sucks the life out of it because the different phases of each channel cancel each other out. I discovered the original instrumental track for "Lost One's Weeping" has this issue. The guitar sounds nice and full in stereo but loses everything played in mono! This is important when so many people listen to music on their phone which often only has a mono speaker on it (although it should be in stereo if they are using headphones or a stereo Bluetooth speaker).
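A few-line demo of that mono problem: widen by putting opposite-polarity material in the two channels, and the "wide" part vanishes completely when a phone sums L and R.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 220 * t)
wide = 0.5 * np.sin(2 * np.pi * 330 * t)  # the "widening" component

left, right = dry + wide, dry - wide      # same extra, polarity-flipped
mono = 0.5 * (left + right)
print(np.allclose(mono, dry))             # True: the wide part cancelled out
```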

I tried using different compressor settings on the left and right channels, but that just made it sound "different" not "better." Same thing when I tried using V3 Miku in one channel and V4 Miku in the other. Just "different" not "better."

I tried fractionally delaying and/or detuning one channel, but it didn't make much of an impact unless I pushed it to the point where it became too noticeable or distracting. I tried different amounts of reverb on each channel, but in general I prefer delay over reverb (at least on vocals). I tried a few different saturation plugins, but the effect they have on synth vocals is either too subtle to notice or bordering on distortion, and I didn't want distortion in the natural-sounding vocals I was going for. There are so many other things I tried that aren't worth mentioning.

What finally worked for me was a mostly neutrally EQ'ed center channel plus additional left and right channels EQ'ed with curves that differ from each other, all recombined. It also helped temper the VST effects I was using on the center channel and kept the natural sound I was going for. This shows the basic idea of the stereo mix I did for my cover of "Lost One's Weeping" to get a fuller, strong but natural sound:

[Image: stereo_width_libreDraw.jpg – diagram of the neutral center channel plus differently EQ'ed left/right channels]

The song sounds okay with the center channel on its own. But it sounds much better with the additional left and right channels mixed in.
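For anyone who wants to try this, a crude sketch of the three-channel idea (the filter frequencies and mix levels are made up; the point is that L and R differ by EQ rather than by delay, so the mono sum doesn't comb-filter):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def tilt(x, sr, cutoff, kind):
    """Crude one-knob EQ: add back a filtered copy to boost that band."""
    sos = butter(2, cutoff, btype=kind, fs=sr, output="sos")
    return x + 0.5 * sosfilt(sos, x)  # ~3.5 dB boost in the passband

def three_channel_mix(vocal, sr):
    center = vocal                            # neutral, carries the VST chain
    left = tilt(vocal, sr, 2000.0, "high")    # brighter curve on the left
    right = tilt(vocal, sr, 500.0, "low")     # warmer curve on the right
    return 0.7 * center + 0.3 * left, 0.7 * center + 0.3 * right
```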
 
  • Wow
Reactions: mobius017

mobius017

Aspiring ∞ Creator
Apr 8, 2018
1,982
@Nokone Miku: That's really interesting! I wouldn't have thought of splitting the vocal that way for the purpose of fullness, but that does sound like it would work.

I've heard of recommendations to use a plugin called Doubler, but I'm not sure where it comes from. Is it this one from iZotope?

This might be getting too theoretical, but why does delay/phase cancellation cause people to perceive the sound as being wider? I'm guessing it's because a slightly different signal reaches the left/right ears, and the cancellation causes gaps to appear in one that are filled by the other. So when your brain processes them, it sort of unifies them into a sound that seems to come from all around you?
 

Nokone Miku

Aspiring Lyricist/Producer
Jul 14, 2021
76
This might be getting too theoretical, but why does delay/phase cancellation cause people to perceive the sound as being wider? I'm guessing it's because a slightly different signal reaches the left/right ears, and the cancellation causes gaps to appear in one that are filled by the other. So when your brain processes them, it sort of unifies them into a sound that seems to come from all around you?
When you're listening to a sound in person, the sound waves are reverberating around the space and reflecting off surfaces at various angles. And because our ears are on the sides of our heads, the sounds reach them at different angles, at different times, and with slightly different qualities imparted by whatever they've reflected off of. So making the sound different in each ear tricks our brain into thinking the sound is coming from all around us instead of from whatever speaker or headphones we're listening to.
 
  • Like
Reactions: mobius017

inactive

Passionate Fan
Jun 27, 2019
179
One thing to remember is that width/stereo perception changes with frequency range. At low frequencies, we can't really tell direction. In the mid frequencies, we rely more upon left/right phase (timing) differences, but at higher frequencies, we rely more upon left/right loudness differences. And the frequencies at which these changes occur depend upon your physiology.
 
