  • Audio Myth-Busting series EP.3 "Trust your ears!" (Q&A Clarified Version)
    *Beware wall of text* (as always)

    "Unfortunately, when you do trust your ears, you're not just trusting your ears"

    Q: Is the "trust your ears" advice actually helpful?
    A:
    In my personal experience, "trust your ears" is the most common and unhelpful piece of non-advice you'll find online, trotted out with tedious regularity by people who don't actually have a useful answer to your question.

    If you read my Audio Myth-Busting series, you know that most adults can't hear much above 16 kHz, that we lose the ability to detect higher frequencies as we get older, and that our hearing can be affected by conditions such as a cold, earwax buildup, and fatigue. On top of that, what you hear is shaped by your audio interface, monitor quality and positioning, and the shape and materials of your room and everything in it.

    Q: Can I truly trust my ears?
    A:
    One of the most frustrating things about audio production is that you simply can't trust your ears. The human ear is not a very reliable tool for judging whether your tracks sound great or not, plain and simple. Our ears adjust and our brains compensate for what we're hearing, and very quickly we lose perspective on what our audio actually sounds like.

    I learned this the hard way... it's very depressing until you get used to it, but that's just the way it is. This is how our brains work. Accept it and work around it.

    Q: If I can't trust my ears, what can I do?
    A:
    One of the best things you can do to reset your ears is to play your mix through a very different set of speakers: cheap earbuds, laptop speakers or even your phone.
    Try it again with some headphones. Your ears will be fresh to the way your cans sound, but of course after a minute (or less) they will get used to that sound and you'll lose some perspective again. By switching speakers (even to cheap speakers) you force your ears to "wake up" and start paying close attention to the frequency balance. This reset is so helpful in giving you the information you need to make sure your mix is where you want it to be, and you don't have to spend a lot of money either. Just listen to your mix on something other than your main speakers every now and then, and you'll be better off for it.

    A: Reference tracks are very handy when it comes to checking the mix. By simply opening up a pro mix and listening to it for a minute, we can quickly regain perspective on what sounds good and how a mix is represented on our system.

    A: In my experience, listening at lower volumes helps immensely. When we turn up the volume, our ears over-emphasize the high and low frequencies, somewhat like the loudness setting on stereos that makes music sound "better" and more "exciting". That's why mixing only at loud volumes can be very misleading. You think your mix sounds great, but when it's played back at a moderate to quiet volume everything falls apart. The solution is to mix at a much quieter volume. Of course, it's always good to play back your mixes at a few different volume levels to gain perspective (because remember, you can't trust your ears), but in general keep it low while you mix.

    Q: Are psychoacoustics real?
    A:
    Yes. Psychoacoustics, the study of sound perception, is real; it describes how humans perceive various sounds. The human ear can nominally hear sounds in the range of 20 Hz (0.02 kHz) to 20,000 Hz (20 kHz); these are our limits of perception. The upper limit tends to decrease with age, and most adults are unable to hear above 16 kHz. The lowest frequency that has been identified as a musical tone is 12 Hz, under ideal laboratory conditions. Tones between 4 and 16 Hz can be perceived via the body's sense of touch.
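    If you want to check those upper limits for yourself, here is a minimal sketch (assuming Python 3 with NumPy installed; the filename, frequency and level are just placeholder values) that writes a short sine-tone WAV you can audition at a low, safe volume:

```python
# Minimal test-tone sketch (assumed setup: Python 3 + NumPy; values are placeholders).
import wave
import numpy as np

SAMPLE_RATE = 44100          # CD-quality sample rate (Hz)
DURATION = 3.0               # length of the tone in seconds
FREQ = 16000.0               # test frequency (Hz) - try 12k, 14k, 16k, 18k...
AMPLITUDE = 0.3              # keep it modest to protect your ears

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
tone = AMPLITUDE * np.sin(2 * np.pi * FREQ * t)

# Short fade-in/out so start/stop clicks don't give the tone away.
fade = int(0.05 * SAMPLE_RATE)
tone[:fade] *= np.linspace(0.0, 1.0, fade)
tone[-fade:] *= np.linspace(1.0, 0.0, fade)

# Write a mono 16-bit WAV using only the standard library's wave module.
samples = (tone * 32767).astype(np.int16)
with wave.open("test_tone_16k.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(SAMPLE_RATE)
    wf.writeframes(samples.tobytes())
```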

    Conclusions (if you read all of this, please don't take everything too seriously)
    This is why professional studios are designed by acousticians, treated with specialised materials, and use high-end monitors and visual analysis tools.

    My advice.
    Trust your ears, but only up to a point. Rest them, use reference tracks, check your mixdowns on as many reproduction systems as possible, and grab your trustworthy spectral analyser to keep an eye on those hard-to-monitor bass tones and rogue ringing frequencies.

    If you don't have one, take a look here: My recommendation Plugins for Audio Production
    You are not listening through the speakers.
    You are listening to the sound of your room.

    Hope this helps.
    Mixing contest week 2... only a handful of contestants are still active.
    I read a lot of the rants (almost all of them) on the private forum. In conclusion:
    - Most people quit because they realize "this is too much for me" and give up halfway. (Technical stuff isn't fun sometimes.)
    - Some people quit because they rapidly lose confidence during the work, the pressure builds up, and they break down.
    - A few people are overconfident, receive poor feedback, and rage quit.

    Summary: Yeah... that's how it goes.

    Bless all the guys and girls out there still fighting. A lot of prizes await.
    The interesting thing is that some artists purposely place vocals off-grid (swing).
    That makes sense... I will try that.
    I should use it more.
    Playing with velocity (VEL) and EVEC gives better control over a consonant's length, impact and tension. When it works, it sounds so good.
    Also, some voice colors like Hard2 work wonders at high pitches, and Dark also does well at mid to low pitches if you play with the velocity.
    But not every word works perfectly, since Luka always has stutter problems, especially with the miscellaneous phonetic symbols like Sil and Asp; they are sometimes broken on Luka.
    The Sil phonetic breaks the transition too early or too fast and is inaccurate, and the Asp phonetic is sometimes inaccurate as well. It can't fix some choppy phoneme combinations, which forces you to do a lot more work, like drawing the dynamics parameter with the pencil tool or simply cutting it up in the DAW.
    VyNancyV6
    Any tips for Miku EVEC?
    IO+
    Aww, I'm sorry, I don't have a Miku VB at the moment.

    But generally, velocity does more than just control the length of consonants; it controls the accent too.
    Audio Myth-Busting series EP.2 "Which DAW Sounds The Best?"
    *Beware wall of text*


    I did this experiment around 3-4 years ago; back in the day I only had Cubase 5 and Adobe Audition 1.5.

    Ah shit, here we go again.
    To be honest, I'm so tired of this endless DAW war that has been going on for over a decade.
    If you've found yourself caught up on forums arguing the toss about which DAW sounds best, then go watch videos of kittens licking their balls instead; it's a better waste of time.

    The Underlying Math and Audio Quality Aspect of DAWs
    Fundamentally, DAWs are calculators at heart. They take multiple audio streams, add them together, and give you the results via your monitors or a bounced audio file.

    Long story short: in the past, audio engines used fewer bits for the math operations behind all digital signal processing and sound generation. Nowadays, software companies use the highest practically possible bit depth to stay current, which is 64-bit float. (My obsolete Cubase 5 runs at 32-bit floating point.)
    However, some producers - including experienced audio professionals who you'd like to imagine know better - for some reason think DAWs run on voodoo or some other arcane art.
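    Just to show how unmagical that summing really is, here is a toy sketch (assuming Python 3 with NumPy; the 66 "tracks" are random noise rather than a real session) comparing a 64-bit float sum of identical material against a 32-bit float sum:

```python
# Toy mixing-engine comparison (assumed setup: Python 3 + NumPy; data is synthetic).
import numpy as np

rng = np.random.default_rng(0)
tracks = rng.uniform(-0.01, 0.01, size=(66, 48000))   # 66 fake one-second tracks

mix64 = tracks.sum(axis=0)                             # summing in 64-bit float
mix32 = tracks.astype(np.float32).sum(axis=0)          # summing in 32-bit float

# The two sums differ, but only at a level far below anything audible.
diff = np.max(np.abs(mix64 - mix32))
print(f"Worst-case difference: {20 * np.log10(diff):.0f} dBFS")
```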

    Basic Thinking
    My suspicion is that all the latest DAWs are basically the same in their core mixing capabilities.
    If there is any difference, it should be so minimal that it can't be heard, showing up only in phase cancellation tests, perhaps at less than -70 to -90 dB. So I'll explore this topic.

    Material for Testing
    • Microsoft Windows 7 (64-bit), Mac OS Version 10.6: Snow Leopard
    • Cubase 5 (32-bit)
    • Reason 5 (32-bit)
    • Pro Tools 10 (32-bit)
    • Cakewalk Sonar (64-bit)
    • Studio One 4 (64-bit)
    • Reaper 6 (64-bit)
    • Audacity 2.0.6 (32-bit)
    • My finished multi-track session: 66 tracks (24-bit/48 kHz)

      Note:
      • no plugins on master buss
      • no plugins on instrument channels
      • default volume 0 dB or Unity Gain on all the channels
      • no automation
      • stereo pan law: equal power
    Render Settings
    I used these render settings:
    • output: master output
    • file type: WAV
    • sample rate: 44.1 kHz
    • bit depth: 24 bit
    • dither options: no dithering
    Phase Cancellation Test
    I set out to find the differences between the mixing engines.
    First, I rendered the multi-tracks in Cubase 5, Reason 5, Pro Tools 10, Cakewalk Sonar, Studio One 4 and Reaper 6, and then ran a phase cancellation test in Audacity. The cancellation was perfect, so there were no differences; the result was silence.
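    If you want to repeat the null test outside of Audacity, here is a minimal sketch of the same idea (assuming Python 3 with the numpy and soundfile packages; the file names are placeholders for your own renders):

```python
# Minimal null-test sketch (assumed setup: Python 3 + numpy + soundfile;
# the WAV names below are placeholders for two renders of the same session).
import numpy as np
import soundfile as sf

mix_a, sr_a = sf.read("render_daw_a.wav")
mix_b, sr_b = sf.read("render_daw_b.wav")
assert sr_a == sr_b, "renders must share the same sample rate"

# Trim to the shorter file in case the exports differ by a few samples of tail.
n = min(len(mix_a), len(mix_b))
residual = mix_a[:n] - mix_b[:n]        # polarity-flip one render and sum

peak = np.max(np.abs(residual))
if peak == 0:
    print("Perfect null: the two renders are bit-identical.")
else:
    print(f"Residual peak: {20 * np.log10(peak):.1f} dBFS")
```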

    Conclusion

    There are no differences when:
    • no stock plugins used
    • no automation used
    • no warping used
    • no different stereo pan law
    There can be differences when:
    • stock plugins used
    • automation used
    • different warping used
    • different stereo pan law

    You can do this experiment yourself; just be honest with yourself.
    Read more about it: DAW wars
    Audio Myth-Busting series EP.1 "Do higher sample rates mean better audio quality?" (Q&A Clarified Version)
    *Beware wall of text*

    Q: Do higher sample rates mean better audio quality?
    A:
    Not always, unless you have a $3,500+ HDX converter.

    Q: How is that possible?
    A:
    In a nutshell, many "affordable" soundcards have a non-linear response to high-frequency content, meaning that even though they are technically capable of recording at 96 kHz and above, the small benefits of the higher sample rate are completely outweighed by unwanted "inter-modulation distortion" in the analogue stages.

    Q: What's the point of high sample rates anyway?
    A:
    The sample rate determines how many samples per second a digital audio system uses to record the audio signal. The higher the sample rate, the higher the frequencies a system can record. CDs, most MP3s and the AAC files sold by the iTunes Store all use a sample rate of 44.1 kHz, which means they can reproduce frequencies up to roughly 20 kHz.
    A: Testing shows that most adults can't hear much above 16 kHz, so on the face of it, this seems sensible enough. Some can, but not the majority, and examples of people who can hear above 20 kHz are few and far between. To accurately reproduce everything below 20 kHz, a digital audio system removes everything above 20 kHz - this is the job of the anti-aliasing filter.
    A: But a fair few musical instruments produce sound well above these frequencies - muted trumpet and percussion instruments like cymbals or chime bars are clear examples. This leads to two potential objections to a 44.1 kHz sample rate: first, that in order to reproduce a sound accurately we should capture as much of it as possible, including frequencies we probably can't hear (there are various suggestions that we may be able to somehow perceive these sounds, even if we can't actually hear them); and second, that depending on its design, the anti-aliasing filter may have an effect at frequencies well below the 20 kHz cut-off point.
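    As a small illustration of why that anti-aliasing filter has to exist at all: anything above half the sample rate (the Nyquist frequency) cannot be represented and folds back down into the audible range. A minimal sketch of that folding, assuming Python 3 with NumPy and using purely illustrative numbers:

```python
# Aliasing sketch (assumed setup: Python 3 + NumPy; values are illustrative only).
import numpy as np

SAMPLE_RATE = 44100
NYQUIST = SAMPLE_RATE / 2                 # 22.05 kHz: highest representable frequency
TONE = 25000.0                            # ultrasonic tone, above Nyquist

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of samples
x = np.sin(2 * np.pi * TONE * t)

# Find the strongest frequency actually present in the sampled signal.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / SAMPLE_RATE)
print(f"Nyquist: {NYQUIST:.0f} Hz")
print(f"Strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")
# Prints roughly 19100 Hz: the 25 kHz tone folds back to 44100 - 25000 Hz.
```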

    Q: So why NOT use higher sample rates, then? Back when the CD was released, recording at 96 kHz or above simply wasn't viable at a reasonable price, especially not in consumer audio. Times have moved on, though, and these days almost any off-the-peg digital audio chip is capable of at least 96 kHz processing, if not higher.
    Q: Granted, these files take up much more space than plain 44.1 kHz audio, but hard drive space is cheap, and getting cheaper all the time - so why not record at 96 kHz or higher, just in case either of those hotly debated arguments really does carry some weight?

    A:
    The answer lies in the analogue circuitry of the equipment we use. Just because the digital hardware in an interface is capable of 96 kHz or higher audio processing doesn't mean the analogue stages will record or play the signal cleanly. It's quite common for ultrasonic content to cause inter-modulation distortion right down into the audible range. Or, in plain English, the inaudible high-frequency content actually makes the audio you can hear sound worse.
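    To make "inter-modulation distortion" a bit more concrete, here is a toy sketch (assuming Python 3 with NumPy; the non-linearity is an invented quadratic term, not a model of any real converter) in which two ultrasonic tones that nobody can hear produce a perfectly audible difference tone:

```python
# Toy inter-modulation sketch (assumed setup: Python 3 + NumPy; the "analogue
# stage" is a made-up quadratic non-linearity, purely for illustration).
import numpy as np

SAMPLE_RATE = 192000                       # high rate, so the ultrasonics exist at all
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second

# Two ultrasonic tones, both far above the range of human hearing.
x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

# A slightly non-linear "analogue stage": output = input + a bit of input squared.
y = x + 0.1 * x**2

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / SAMPLE_RATE)

# Look only at the audible band (ignoring DC) and report the strongest component.
audible = (freqs > 20) & (freqs < 20000)
peak_freq = freqs[audible][np.argmax(spectrum[audible])]
print(f"Strongest audible component: {peak_freq:.0f} Hz")
# Prints roughly 3000 Hz: the 33 kHz - 30 kHz difference tone, now fully audible.
```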

    Q&A: a controversial question
    Finally, the fact that ultrasonic content can potentially cause inter-modulation distortion and make things sound different even when they shouldn't raises a tough question. Are all the people who claim to be hearing improved quality at 96 kHz and above really hearing what they think they are? Or are they just hearing inter-modulation distortion?

    Summary
    Maybe a 48 kHz sample rate is actually good enough?
    "Maybe yes and maybe no" - it all comes down to this:

    What kind of project are you working on?

    Sound design, organic orchestra, audiophile jazz, classical, world music destined for the studios, or do you have an HD playback system?
    Then yes, 96 kHz is better, because film and TV broadcasting often require recordings at a much higher sample rate (sometimes even higher than 96 kHz).

    Nah, I just do collabs, create music and fun stuff.
    Working at a high sample rate is quite a lot of hassle: you export about 4x more data, and the CPU has to work something like 10x harder.
    If you gain very little to nothing from it, why create the hassle in the first place?
    I forgot this: Homing Echo, aka kei (Hayashi Kei), has finally come back after 3 years.
    His songs are known for being inspiring, though not exactly cheerful, with that iconic J-Rock/pop rhythm.

    Not quite famous, but that doesn't matter to me (as always).
    He's one of the few producers with few to no connections when it comes to producing (a one-man company).
    Recommended.

    Not a famous artist, but he still has a special place in my mind.

    If you like an 80's-ish tone and big-band jazz, ballad-like feels,
    please check it out.
    sketchesofpayne
    Glad you introduced me to this producer. I listened to the stuff on their channel. It has that lounge jazz feel to it, and Luka's voice fits it really well.
    IO+
    I'm very happy to hear that!

    Whenever I post a song, I make sure it's from the official channel, or at least I include the official website or Twitter. I thought this would be quite valuable for true listeners like you guys.
    Mixing contest these days... they have no mercy at all.
    The number of contestants is dropping fast; I hear their critiques are cruel and cold, and a few people have quit.

    What's the point of this? And it's only the first week...
    They once said, "we don't have enough... we need more engineers." I'm not sure what kind of critique those old men are giving inside that video call room.

    I kind of feel sorry for those guys and girls; I hope they will understand and move on.
    I have added 2 new Monitoring Tools.
    Please check them out.

    The SPAN FFT spectrum analyzer and the dpMeter5 multi-channel loudness meter.
    If you have any questions or comments, please join the discussion.

    I might continue the Audio Myth-Busting and audio plugin analysis series once I have some spare time.

    Things are getting busy due to work and the mixing contest, so I can't update the resources at the moment.
    Mixing requests & commissions are CLOSED at the moment.

    Thank you.
    I don't know where to place this, so I'll put it here. It's about phase cancellation.
    Note: This post is not about plugins, just the basic physics of sound.

    Phase cancellation is generally something to be avoided when working with audio, but it can also be taken advantage of for its ability to isolate differential signals within two otherwise-identical mixes.

    When two identical signals are perfectly in phase and played together, they double in volume as they reinforce each other. If we then shift the phase of one of them, degree by degree, until it reaches 180 degrees (a full half cycle), the two signals cancel each other out completely, resulting in silence.

    For a better understanding, please read about wave interference.
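    Here is a minimal numerical sketch of that behaviour (assuming Python 3 with NumPy; the 440 Hz tone is just an arbitrary example): two identical copies double in level, while a 180-degree shift nulls completely.

```python
# Constructive vs. destructive interference (assumed setup: Python 3 + NumPy).
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE             # one second
FREQ = 440.0                                         # any test tone will do

a = np.sin(2 * np.pi * FREQ * t)                     # original signal
in_phase = np.sin(2 * np.pi * FREQ * t)              # 0-degree shift
out_of_phase = np.sin(2 * np.pi * FREQ * t + np.pi)  # 180-degree shift

print(np.max(np.abs(a + in_phase)))        # ~2.0 -> the two copies reinforce (double)
print(np.max(np.abs(a + out_of_phase)))    # ~0.0 -> full cancellation, silence
```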


    You can also abuse phase cancellation to some degree.

    I did this experiment long ago (4-5 years); it's not 100% perfect, but it shows how destructive wave interference can be.
    I created a studio essentials guide for beginners.

