
Mixing sounds uneven in my music/remixes.



While I'm starting to get the hang of composing music and remixes, I think I need some help learning how to properly tweak my mixing. Most of the time it sounds pretty uneven or muddy in its panning, reverb, frequency balance, etc., especially when I have too much reverb going on in the background.

Does anybody here have any advice on how to improve my mixing? Thanks in advance!


^ Those points of "advice" are so general and devoid of details that they are completely useless. I have to wonder why you bothered writing anything at all.

Actual advice:

Keeping frequencies and things like that "even" is a persistent challenge in audio; it's pretty much the entirety of mixing in the first place. By now I've been doing this long enough, and gotten good enough (especially compared to my first OCReMix here), that I can throw some weight around on the subject, even though I'm still "hillbillying" my way through it, since I still lack formal training beyond spending 15 years obsessively trying to get better. With that pretext out of the way, here's what I do to achieve results:

1. EQ can be simplified (in my own personal, unpatented and unofficial model) into three things: the Floor (the bass), the Body (the mids) and the Voice (the highs). The Floor is literally like the stage floor under the performance. The Body is the "stage presence"; it gives the soundscape a kind of muscle in the overall sound. Take down a whole bunch of the mids in your EQ to hear what I'm talking about. The Voice is rather obvious: above the midpoint is where most melodies and leading sounds sit, over the Floor and Body.

You'd think "mud" would come from the Floor, but I find it usually comes from the Body below the midpoint. When I'm cutting mud from a mix, it's between 100 and 500 Hz. Additionally, I find a lot of instruments that come stock in a DAW are already very bass-heavy and muddy. Sixto Sounds used to make fun of me for this, but I also recommend cutting some bass out of the bass instruments themselves. As silly as that might sound, you sometimes just need to do it to achieve the right balance.
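
If you want to see what that kind of cut looks like in numbers, here's a minimal Python/numpy sketch using the standard RBJ-cookbook peaking filter. The 300 Hz center, -4 dB depth, and Q of 1 are illustrative starting values, not a prescription:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(freq_hz, gain_db, q, sr):
        # Standard RBJ-cookbook peaking biquad; negative gain_db = cut
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * freq_hz / sr
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    sr = 44100
    b, a = peaking_eq(300.0, -4.0, 1.0, sr)  # wide -4 dB "mud" cut at 300 Hz
    # cleaned = lfilter(b, a, muddy_mono_signal)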

2. I never learned how to use Reverb properly and I still have trouble with it, but there are some solid fundamentals I can pass on.
    2a: Find the amount of reverb or general sound you want, then go straight to DRY. Then add the WET back in (i.e., the reverb amount) slightly and slowly until you hit a balance that isn't too echoey or out of touch with everything else, always opting for too dry over too wet. I only had one college class for audio production, and it wasn't even the right class, but my professor said something that (pun intended) still resonates today: "You're supposed to make reverb sound like you're not using reverb at all." He was a goofy-ass guy, but this was spot-on.
    2b: Not all reverb plugins are equal. While there's typically no such thing as a bad reverb plugin so long as it provides any reverb at all, not every reverb plugin or sound works for everyone. If you're getting too much mud and slush from your reverb, you might need to go looking for one that actually sounds like what you want.
    2c: For instrument channels where you're using reverb, a delay effect should be on as well. The trick is to set it to a very basic 3-step delay, then turn it down to like 3% or 5%, where you can't hear the instrument bouncing off the sides (unless, of course, you want bouncing echoes, like many instruments in modern productions have). This is a trick to finalize wetness that I learned from Rozovian, who in turn learned it from bLiNd, IIRC. I use a delay effect on practically every channel except the main bus (the master channel all instruments feed into). There's a quick sketch of this right after this list.
    2d: Some reverb plugins let you EQ the reverb itself, but I find the effect is much more subtle than it sounds. If the reverb is too muddy or slushy, fix it in the instrument channel, not the reverb plugin; the reverb plugin's EQ is just for minimal tweaking after the real work is done in the instrument channel.
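
To make 2c concrete, here's a rough numpy sketch of that barely-audible multi-tap delay, mixed in at only a few percent. The 120 ms spacing, 3 taps, and per-repeat decay are assumed values to play with, and the `wet` amount is exactly the "start dry, creep the wet up" idea from 2a:

    import numpy as np

    def subtle_delay(x, sr, delay_ms=120.0, taps=3, wet=0.04):
        # Mix a basic 3-tap delay in at ~3-5%, per the trick in 2c;
        # each repeat comes back quieter than the last
        d = int(sr * delay_ms / 1000.0)
        out = x.copy()
        for i in range(1, taps + 1):
            if i * d >= len(x):
                break
            echo = np.zeros_like(x)
            echo[i * d:] = x[:len(x) - i * d] * (0.5 ** i)
            out += wet * echo
        return out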

3. In heavy, thick productions, the bass instruments should have their stereo separation reduced to straight-up MONO, with either a very slight amount of stereo added back in or none at all. Between the bass and drums, it's the drums that need stereo placement and width; keeping them wide while focusing on a narrow bass will fit the Floor correctly. I also add a compressor on the bass. I use FL Studio 11, and I just use the FL Studio Multiband Compressor on the "Mastering 2.4db" preset, then adjust the EQ and mixer channel volume as needed. I read this bass tip in some random audio mixing book years ago at a Books-A-Million and it has worked ever since.
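
The usual math behind the mono-bass move is a mid/side trick. A minimal sketch, assuming your stereo bass is two numpy arrays; the 0.1 width is just one example of "very slightly adding some stereo back in":

    import numpy as np

    def set_width(left, right, width):
        # width = 0.0 is full mono, 1.0 leaves the signal untouched
        mid = 0.5 * (left + right)
        side = 0.5 * (left - right) * width
        return mid + side, mid - side

    # bass_l, bass_r = set_width(bass_l, bass_r, 0.0)  # straight-up mono
    # bass_l, bass_r = set_width(bass_l, bass_r, 0.1)  # a hint of stereo back in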

4. Figuring out which instruments/frequencies are wrong during production mostly comes down to experience. You won't really know what to fix until you've already done it right a few dozen times. If I have trouble with this, I mute and solo the instruments starting from the bass up (or wherever the problem seems to be) and automate volume or EQ as needed until it no longer makes me think something is wrong.
    4a: This deserves its own line: learning how to automate volume levels and EQ frequencies in your DAW is essential. It's complicated and a pain in the ass to do, but it's easier than the other hillbilly methods of fixing stuff. If I had learned to do it in my early days, I would have saved myself dozens of song headaches.
    4b: Also remember that most instruments and frequencies are not supposed to have the same volume and intensity all the way through the song. Sometimes the bass is supposed to be pulled back, sometimes the piano or guitar has to become inaudible, etc. Sometimes being uneven is what makes it sound organic and makes it work.

And that's all I feel like typing. Enjoy.



You deserve the biggest virtual hug for this lol. Your point about automating EQ and the like is something I haven't done personally yet. I got Gullfoss a while back and I've been lazy about automation since then haha. I still do a lot of panning, volume adjustment, and EQ by myself, but at the end, when I have a mud monster, it's very helpful to have Gullfoss alleviate some of that, or at least show me what's making things not sound like I want. Then I can go back and fix some stuff.

 

About the bass mono thing: I dunno exactly how I can do that, but usually if it's a heavy bass instrument or percussion, I point those pans all the way to the middle haha. Idk if that makes them mono or not, but I find that having those kinds of instruments anywhere else often creates problems. This isn't cut and dry (especially with strings), but I do it a lot.


Your DAW should have, on each instrument's effects channel, a set of controls that includes stereo width and the ability to turn it all the way down to zero. Turning it to zero makes the channel mono, which is not the same as just panning it to the middle.

Failing that, Soundspot has a cheap plugin (or it was cheap when I bought it; not sure what it goes for now) called "Focus" that has a very easy Mono option. It's big, it's unambiguous, you just click it on and the signal is narrow and mono. It's useful for some other general sound improvement as well, so I'd recommend getting it anyway.


56 minutes ago, HarlemHeat360 said:

About the bass mono thing: I dunno exactly how I can do that, but usually if it's a heavy bass instrument or percussion, I point those pans all the way to the middle haha. Idk if that makes them mono or not, but I find that having those kinds of instruments anywhere else often creates problems. This isn't cut and dry (especially with strings), but I do it a lot.

Most DAWs have a control for the width of a sound. In Reaper there are four different modes for how the mixer handles panning, aside from the pan law. The Stereo Pan mixer mode has a normal pan knob and a Width knob; set the Width knob to 0 to mono a track, for example. Failing that, Voxengo offers their Mid-Side Encoder (MSED) plugin for free: click the Side Mute button and the signal will be mono.

 

Some further notes regarding reverb. Reverb is traditionally a send effect. In ye olden days this meant that if you wanted reverb applied externally, you used an auxiliary on the mixer, sent that off to the reverb unit, and returned it into the mixer. I mention this because it's an important factor to consider when you're mixing: a natural consequence of this limitation was a homogeneity in the way reverb sounded. Another thing to keep in mind with reverb is to actually control the frequencies entering and/or exiting the reverb. If you read up on mixing, you will see that you're told not to send bass instruments into reverb; the reason has to do with keeping the mix's low end from getting too cluttered. But if you, say, add a HPF before the reverb? Well, now that simply isn't a problem. The same applies to the high end of a reverb. While it may sound counterintuitive at first, by using a simple HPF and LPF going into the reverb you can exert even more control over how your reverb actually sounds.
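
A quick sketch of that band-limited send idea, filtering only what feeds the reverb (scipy Butterworth filters; the 150 Hz and 8 kHz corners are illustrative, not gospel):

    from scipy.signal import butter, sosfilt

    sr = 44100
    hp = butter(2, 150.0, btype="highpass", fs=sr, output="sos")
    lp = butter(2, 8000.0, btype="lowpass", fs=sr, output="sos")

    def band_limited_send(x):
        # Filter the send only: the dry track keeps its lows and highs,
        # but the reverb never hears them
        return sosfilt(lp, sosfilt(hp, x))

    # reverb_input = band_limited_send(track_send)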

Going further is not thinking of reverb as reverb, but instead thinking of it as another pan knob. Except that instead of panning left and right, reverb pans forward and backward in the mix. The more reverb a sound has, the further back it will sit; the less it has, the closer it will sound. By using the same reverb across the instruments in your mix, you can place instruments in front of or behind one another, creating the illusion of depth. Couple this with controlling the amount of dry sound that goes to the master and you have complete control over the depth of the soundstage. This is one of the reasons you see the option for Pre-Fader Listen (PFL) in your DAW.
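
In code terms, depth-as-send-amount looks something like this. Everything here is a placeholder (noise stands in for the tracks, and `your_reverb` is a hypothetical function); the point is that one shared reverb plus per-track send levels is the front/back "pan":

    import numpy as np

    sr = 44100
    tracks = {n: np.random.randn(sr) * 0.1 for n in ("lead", "pad", "strings")}
    # Same reverb for everyone; only the send amount differs, so more send
    # simply means "further back" in the mix
    sends = {"lead": 0.10, "pad": 0.35, "strings": 0.50}
    reverb_input = sum(g * tracks[n] for n, g in sends.items())
    # reverb_output = your_reverb(reverb_input)  # then sum with the dry tracks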

In dense productions you may want to skip an actual reverb plugin and instead opt for a delay plugin. The reason is that, at the end of the day, reverb is really nothing more than a ton of diffused echoes. A properly configured delay plugin can emulate a reverb in the context of a full mix, and since it is a delay plugin, you have considerably more control over the echoes themselves.
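
"A ton of diffused echoes" is exactly how the classic Schroeder reverbs were built. Here's a toy version of one of their building blocks, a feedback comb filter (a slow per-sample loop, purely to illustrate the idea; the delay times and 0.7 feedback are assumed values):

    import numpy as np

    def feedback_comb(x, sr, delay_ms, g):
        # Every trip around the loop adds another echo, each g times quieter
        d = int(sr * delay_ms / 1000.0)
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = x[n] + (g * y[n - d] if n >= d else 0.0)
        return y

    # Summing a few combs with unrelated delay times smears the echoes
    # into a crude diffuse tail:
    # tail = sum(feedback_comb(x, 44100, ms, 0.7) for ms in (29.7, 37.1, 41.1, 43.7))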

Turning to the comment about cutting bass from bass instruments: this may sound crazy, but it is very sound advice in certain situations. See, our brains lie to us. One strange oddity of how our hearing works is that if you remove the fundamental from a bass sound but leave the rest of the harmonic structure, the brain will quite literally fill in that missing fundamental. Because all the rest of the harmonics are there, the brain decides the fundamental must also be there. So by removing bass you haven't effectively lost anything, and you've gained valuable headroom.
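
You can hear the missing-fundamental effect for yourself with a few lines of numpy: the tone below contains only harmonics 2 through 6 of a 55 Hz bass note, yet the ear still reports the pitch as 55 Hz:

    import numpy as np

    sr, f0 = 44100, 55.0                # A1, a typical bass note
    t = np.arange(2 * sr) / sr          # two seconds of samples
    # Harmonics 2-6 only; the 55 Hz fundamental itself is absent
    no_fundamental = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(2, 7))
    no_fundamental /= np.max(np.abs(no_fundamental))  # normalize for playback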


@Rapidkirby3k

All right, now I have some more time to write a fuller post regarding mixing, and hopefully build on the foundation @Meteo Xavier has laid.

Equalization - There are charts out there with starting points for EQ, but they are nothing more than suggestions; at the end of the day it's your ears that determine where the EQ ought to go. One thing that must be remembered regarding EQ and compression (I will get there, as it has not yet been mentioned) is to keep the overall loudness the same when you are comparing the dry and wet versions of a sound. As I said before, our brains lie to us. One way they lie is that if a sound is louder, it will sound better 99.9% of the time. So to properly compare an EQ'd sound against one that has not been EQ'd, make sure they are level-matched. This will help you make more objective judgments about what your EQ is actually doing.
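
One low-tech way to level-match is to equalize RMS before you A/B. A rough numpy sketch (RMS is a crude stand-in for perceived loudness, but it removes most of the louder-equals-better bias):

    import numpy as np

    def match_rms(processed, reference):
        # Scale the processed version so its RMS equals the dry reference's
        gain = np.sqrt(np.mean(reference ** 2) / np.mean(processed ** 2))
        return processed * gain

    # eqd = match_rms(eqd, dry)  # now compare the two honestly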

Next, we are actually less sensitive to cuts than to boosts when it comes to EQ. This means you can make deeper and more drastic cuts than boosts. Something you will likely find if you read up on mixing is advice along the lines of "cut narrow, boost wide." This is not universal by any means; think of it instead as a suggested starting place. A practical example is a snare drum. If you want the snare to have some more bottom end, maybe start around 150 Hz. Want more snap to the snare? Try starting around 5 kHz.

You mention muddy mixes in particular. That is typically because there is too much low-mid energy in the mix, loosely around 200 Hz to 700 Hz. So focus on what is actually going on in that region to try to clean up some of the mud. One thing I really want to stress is that music lives in the mids. If you can get roughly 100 Hz to 6 kHz sorted out, you've accomplished the vast majority of the hard part of mixing.

The last tidbit from me regarding frequency and equalization: there is something called the equal-loudness contour. It turns out our hearing is far from linear, and it is most linear around 80 dB SPL. What these contours show is how sensitive we are to different frequencies; for example, low frequencies require much more energy for us to detect than the 2-4 kHz region does. This is most likely a consequence of human speech occurring primarily in that 2-4 kHz region.

Reverb - I think I did an okay job of explaining the gist of it in my prior post so I won't really reiterate it here.

Compression - While this was not mentioned by you, and only in passing by Meteo Xavier, compression is an invaluable tool to learn how to use. Compression is the only tool in your toolbox that acts directly on the time domain of a signal. Want a big kick and bass? The easy way is something called sidechain compression. Got a lot of big guitars and the vocals are getting buried? The easy button is a little sidechain compression. Is the vocal a bit uneven in its short-term dynamics? Compression. Want that bass to sit planted at the bottom? Compression. Pads and strings eating up a bit too much of your available headroom? Compression.

So, what is this compression, exactly? It turns out that being able to manipulate the volume of a signal automatically not only saves a lot of time (because you don't have to automate as much), it can also make a sound more consistent in its overall dynamic range. Now, I could honestly go on for hours about different types of compression, but that wouldn't explain what a compressor does or how to actually use one. So here is my little spiel about a basic compressor.

There are two topologies of compressors and four primary controls. For the topologies, there are what are called feedback and feedforward compressors. The important distinction is where the brains, a.k.a. the detector, of the compressor is fed from. In a feedback compressor, the signal feeding the detector comes after the gain control element (the thing that determines the type of compressor: a JFET, VCA, opto, etc.). In a feedforward design, the signal feeding the detector is more or less the same signal that enters the gain control element. This matters because it influences how the compressor sounds and behaves: in a feedback design the control element takes longer to react, because the signal has to pass through it before the compressor can do anything about it, while in a feedforward design this is not the case. Some compressors let you choose between feedback and feedforward. My suggestion is to simply switch between the two and see which one you like more. It is as simple as that. There are many great and famous compressors in both camps; pick whichever does what you want for the sound you're working with.

Now, the important controls in a compressor: Threshold, Ratio, Attack, and Release. All the threshold does is determine at what level the compressor will even start reacting to the incoming signal. So if you've got a signal that never goes above -10dBFS and the threshold is at -6dBFS, the compressor will simply never react to the signal, even with an infinitely fast attack, because it never exceeds the threshold. (This isn't strictly true, but that has to do with the Knee of the compressor, which I will come to after the ratio.)

The next control is the Attack. The Attack determines how long the compressor must wait after the Threshold has been exceeded before it begins compressing. So if the attack is, say, 1 second, the compressor will only begin to compress once the Threshold has been exceeded for 1 second or more.

The Release determines how long the compressor will wait, after the incoming signal has fallen below the Threshold, before it stops compressing. Going with a 1-second release, the compressor will only begin to let go once the signal has stayed below the threshold for 1 second or more.

Now comes the fun part: the Ratio. The ratio determines exactly how much the compressor will compress a given signal. This is easiest to explain with an example: if the ratio is 2:1, then for every 2dB the signal exceeds the Threshold, only 1dB comes out of the compressor. That is all there is to the ratio of a compressor.

I did mention something called the Knee, and my explanation of the controls assumed what is called a Hard Knee, meaning the compressor only starts reacting once the Threshold has been exceeded. Sometimes, though, it is desirable for a compressor to start compressing a little before the actual threshold is reached; this generally results in a smoother compression action and is called a Soft Knee. It is typically specified in dB: assuming a Threshold of -10dBFS and a Knee of 3dB, the compressor will actually start reacting to signals at -13dBFS, but at a reduced ratio. The ratio increases in tandem with the signal level until the Threshold is reached, at which point the compressor simply uses whatever ratio is dialed in. Something like FabFilter's Pro-C, or even the Fruity Limiter, does a good job of showing visually how the Knee, Ratio, and Threshold interact.
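
To tie Threshold, Ratio, and Knee together, here's the static gain curve as a tiny Python function. This uses the common convention of a knee centered on the threshold; plugins differ on whether the knee width sits around or below the threshold, so treat it as one possible formulation (attack and release would then smooth this curve over time):

    def gain_computer(level_db, threshold_db, ratio, knee_db=0.0):
        # dB in, dB out: the compressor's static transfer curve
        over = level_db - threshold_db
        if knee_db > 0.0 and abs(over) <= knee_db / 2.0:
            # Soft knee: the ratio fades in gradually around the threshold
            return level_db + (1.0 / ratio - 1.0) * (over + knee_db / 2.0) ** 2 / (2.0 * knee_db)
        if over > 0.0:
            return threshold_db + over / ratio  # full ratio above the knee
        return level_db                         # untouched below the threshold

    # 2:1 ratio, -10dBFS threshold: a peak 4dB over comes out only 2dB over
    print(gain_computer(-6.0, -10.0, 2.0))      # -> -8.0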

Armed with that basic primer, you can start to really experiment with what a compressor will do to a signal.

Sidechain compression is a cool technique. Essentially, all you are doing is hijacking the brains of the compressor and feeding in a signal of your choosing; put another way, you're compressing one sound with another. The most common use these days is that big pump you hear in EDM: the bass, pads, or what have you are being compressed by the kick. This turns down the volume of all of those sounds whenever the kick hits, creating that pumping sound.
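
Here's a toy version of that kick-pump, where an envelope follower on the key signal (the hijacked "brains") ducks the other track. The threshold, depth, and release constants are made-up values to experiment with:

    import numpy as np

    def sidechain_duck(signal, key, threshold=0.2, depth=0.8, release=0.9995):
        # Follow the key's (e.g. the kick's) envelope and turn `signal`
        # down whenever the key gets loud: the classic EDM pump
        env = 0.0
        out = np.empty_like(signal)
        for n in range(len(signal)):
            env = max(abs(key[n]), env * release)
            amount = min(1.0, max(0.0, (env - threshold) / (1.0 - threshold)))
            out[n] = signal[n] * (1.0 - depth * amount)
        return out

    # pumped_bass = sidechain_duck(bass, kick)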

I could go on further, but I think that, between myself and Meteo Xavier, you have a good primer to start building your skill set. Most importantly, try different things and find what works for you. And lastly, practice. Mixing is a skill and needs to be exercised :)


Grab yourself a well-mixed track in roughly the style you're going for, and aim to mix similarly. Compare. Especially check that your lows aren't too loud, as they'll eat up a lot of headroom, and that your leads are bright enough, i.e. have enough high-frequency content. It's okay if yours isn't as loud; mixing is more important than loudness, and loudness is easier when things are well mixed. Turn down your reference track accordingly.

You get used to the sound of your track as you're working on it, so make a habit of using a reference track to reset your ears so you can tell when you have too much reverb, too much compression, too much bass, too much of anything.

You can also check your mix by turning the listening volume way down and mixing super quiet, because then you'll only hear what's most important in the mix. Use this to check that background elements stay in the background and the important things are loudest. Compare with your reference track this way too.

Use decent mixing headphones.

