Cleaning up the mix - with single track EQ, master track EQ, EQed aux effect sends, smart panning decisions and other methods



Today I listened to some older and newer mixes of mine on the rather ordinary consumer hi-fi system at my mom's place while I was setting up a shelf for her.

Of course the newest mixes I'm currently working on sounded way better than the older ones - but even there I could hear a small amount of unwanted low-end clutter which I hadn't noticed while mixing on my Yamaha MSP3 studio monitors (frequency range from 65 Hz to 22,000 Hz), my Fostex subwoofer (extends the low-end representation down to around 40 Hz) and my Beyerdynamic DT 880 Pro headphones (with their comprehensive frequency range from around 5 Hz to 35,000 Hz).

I could also hear a bit of low-end mud in tracks from world-famous professional bands.
So I thought it might rather have to do with the hi-fi system itself and its boosted but less defined bass response.

But I'm still not sure, since the tracks from famous bands - especially those from the 80s - mostly sounded like they were produced at a higher level of perfection.
And I think it's mostly because of the cleaner low-end.

Up until this point, I've always separated the other instruments from the bass and drums in the mix with various individual low-cut filter settings, while often letting the bass and/or drums extend relatively freely into the low end.
But over time I've been thinking more and more about also cutting the deepest frequencies of instruments like bass and drums, which usually cover the lowest frequency ranges of the mix unchallenged, with a very steep low-cut filter below roughly 20 Hz - to remove that almost inaudible low-end jumble (which you still perceive on rather bass-emphasized hi-fi systems or even some kitchen radios), especially for playback on less capable devices.

I'm already thinking about doing this directly with a master EQ plugin, simply because it might save a lot of time and you don't have to worry about phase issues (in marked contrast to using steep-edged low-cut filters on individual tracks which share a similar frequency range - if my information about phase problems is correct so far).

...

In this case, I could leave out the steep-edged EQ low-cut filter for the low-end frequencies on the bass and the drum elements completely (the softer, less aggressive low-cut filters on the other instruments, which separate the frequency ranges of the different instruments, would of course still do their usual job) - and the steep-edged EQ low-cut filter in the master EQ plugin would do the whole job instead, without causing any phase issues.

So, in the EQ plugin interface it might look like this:

(Attached screenshot: EQLowCutMasterTrack.PNG - the master track EQ with the steep low-cut filter set to 18 Hz)

With the option Modus (mode), I could also set the phase line (green) from Normal (normal) to Linearphasig (linear-phase) - so it would delay the whole frequency range uniformly instead of shifting only the phase of the cut lower frequencies.
But that shouldn't make a big difference on the master track, apart from a few (barely perceptible) milliseconds of added latency in (parts of) the playback.

I rather wonder what the value of 18 Hz in the frequency field is supposed to mean in this case (the roll-off caused by the steep 36 dB-per-octave low-cut filter already seems to begin slightly around 70 Hz - at 18 Hz the drop is around 10 dB and at 10 Hz it's already around 27 dB) - but maybe that field is more meaningful for peak bands than for low-cut filter settings.
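
To get a rough feel for where a 36 dB/octave low-cut actually lands, here is a minimal sketch in Python/scipy. It assumes a 6th-order Butterworth design with its cutoff set to 18 Hz - the plugin in the screenshot almost certainly uses a different design, and plugins also disagree about whether the frequency field marks the classic -3 dB corner or some other reference point, which could explain the numbers above.

```python
import numpy as np
from scipy import signal

fs = 48000  # sample rate in Hz (assumption)

# 36 dB/octave corresponds to roughly a 6-pole filter (about 6 dB/octave per pole).
# Butterworth is just one possible design; the plugin's actual curve will differ.
sos = signal.butter(6, 18, btype="highpass", fs=fs, output="sos")

# Evaluate the magnitude response at a few frequencies of interest.
freqs = np.array([10.0, 18.0, 30.0, 50.0, 70.0])
_, h = signal.sosfreqz(sos, worN=freqs, fs=fs)
for f, mag in zip(freqs, np.abs(h)):
    print(f"{f:5.0f} Hz: {20 * np.log10(mag):7.1f} dB")
```

A Butterworth like this sits at exactly -3 dB at the 18 Hz setting, so if a plugin shows roughly -10 dB there, its frequency field probably doesn't mark the -3 dB corner but some other point on the curve.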

...

But what do you think...

1) Is it useful to filter out the bottom of the low-end frequency spectrum completely?
2) If yes, at which frequency would you start to radically filter out the bottom end (with a drop of at least 10 dB) - rather at 40 Hz, 30 Hz or 20 Hz?
3) Would you do it directly with a master EQ plugin like in my example?
4) Or would you be afraid of losing some crucial audio information (maybe some playing noises or reverb/delay effects of a contrabass, an electric bass, a kick drum or a low note on a concert grand)?
5) What are your favourite methods of cleaning up the bottom end, the (lower) mids and the other frequencies in the mix (or the mix as a whole)?
6) How can you use the panning of your instruments and sound signals in order to clean up your mix?

...

PS: Since the topic became more extensive and more useful than expected, I changed the title from "Steep-edged low-cut filter on the master track as a general solution for solving low-end clutter issues in the mix?" to "Cleaning up the mix - with single track EQ, master track EQ, EQed aux effect sends, smart panning decisions and other methods".

Edited by Master Mi

It isn't a general solution, but I still do it on the Master Track because I will never hear or feel below 25 Hz anyway. 20 Hz is too little impact to pick. It will cut out room noise on vocal recordings, and excess low end on poorly-recorded samples.

But low-end clutter to me describes low-mid-range as well, which this does not solve. You simply have to solo each instrument that is occupying that range and decide what you want to be prominent, then scoop the others at that range (140 - 300 Hz or so, above kick drums and below regular midrange).

Edited by timaeus222


If you haven't watched all of Dan Worrall's stuff, I highly recommend it: https://www.youtube.com/@DanWorrall.  He also does tutorials for FabFilter's stuff, but in a general way such that it can apply to everything ("Introduction to..." series).  Here are a few tips I've picked up on:

-Reduce the stereo width for your kick channel to mono or near-mono, and consider it for the bass (maybe don't go all the way to mono for bass)

-As far as I know, it's generally a good strategy to HPF each track up until it starts affecting the sound.  The low frequencies stack and build, especially after reverb + effects, after combining multiple tracks, etc.

-On the master, cut all the sub-bass with a HPF @ 20 Hz - you won't hear it anyway.  Consider setting it a bit higher depending on your mix - for music you don't always want the 'rumble' of low bass frequencies.

-Definitely try a HPF @ ~90Hz for just the side channel since our ears generally aren't sensitive to lower frequencies on the sides.  Depending on the specific content of your mix, this may change your stereo image for 90Hz and below (due to phase differences in the filter's slopes and target Hz) - if it sounds worse, play with the slopes of the 20Hz HPF and the 90Hz HPF or use a linear phase EQ for this specific HPF.

-Try HPF'ing your input to your reverb so that lower frequencies aren't 'verbed (probably want a shallower slope for the filter here).  Or at least ducking the wet low frequencies on a new note.
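
For the master-bus tips above (the 20 Hz high-pass on everything and the ~90 Hz high-pass on the side channel only), a minimal offline mid/side sketch in Python might look like this. The file names, the cutoffs and the Butterworth design are only placeholder assumptions for illustration, not a recipe.

```python
import numpy as np
from scipy import signal
import soundfile as sf  # assumption: the stereo mix is available as a file

mix, fs = sf.read("stereo_mix.wav")   # hypothetical file, shape (samples, 2)
left, right = mix[:, 0], mix[:, 1]

# Encode left/right to mid/side
mid = (left + right) / 2.0
side = (left - right) / 2.0

def highpass(x, cutoff_hz, order=4):
    # Simple Butterworth high-pass, applied causally (so it has the usual phase shift)
    sos = signal.butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return signal.sosfilt(sos, x)

# 1) Cut the sub-bass from the whole mix (mid and side) around 20 Hz
mid = highpass(mid, 20.0)
side = highpass(side, 20.0)

# 2) Additionally clear the low end out of the side channel only (~90 Hz)
side = highpass(side, 90.0)

# Decode mid/side back to left/right
out = np.stack([mid + side, mid - side], axis=1)
sf.write("stereo_mix_cleaned.wav", out, fs)
```

If the filtered side channel then sounds off against the mid, a linear-phase variant (e.g. signal.sosfiltfilt for offline processing) avoids the phase offset between the two paths, as mentioned above.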

On 10/30/2023 at 6:20 PM, Master Mi said:

I'm already thinking about doing this directly with a master EQ plugin, just because it might save a lot of time and you don't have to care about phase issues (in marked contrast to using steep-edged low-cut filters at standard tracks which share a similar frequency range - if my information about phase problems at this point is correct so far).

Why care about the phase so much - are your filters not clean (e.g. there's a resonant bump at the target frequency)?  Your ears only pick up on the relative phase differences e.g. if you have different phases at a frequency on the left channel vs right channel or mid channel vs side channel (you can give some width to a mono track by using e.g. a 12 dB/oct HPF on the L channel but a 24 dB/oct HPF on the R channel at the same frequency).  But you should be able to use steep filters on individual tracks without issue (as far as phase is concerned). Phase can be a problem if the stars align and a bunch of individual waveform peaks line up, but that would very likely only be an issue for that individual note and not overall.

Edited by Sengin

On 11/15/2023 at 12:12 PM, Sengin said:

-Reduce the stereo width for your kick channel to mono or near-mono, and consider it for the bass (maybe don't go all the way to mono for bass)


I do it in a similar way.

I keep the bass in mono or very close to mono, opening it up around 1 to 5% towards full stereo - that way the bass sounds less stiff, static and monotone, and it has at least some room to move, especially if I'm using a really dry bass.

And for acoustic drums, I often use a greater stereo width of between around 20 and 50%.

 

On 11/15/2023 at 12:12 PM, Sengin said:

-On the master, cut all the sub-bass with a HPF @ 20 Hz - you won't hear it anyway.  Consider setting it a bit higher depending on your mix - for music you don't always want the 'rumble' of low bass frequencies.


That's the big question.

Guess I'm still afraid of losing some low-end audio information instead of embracing high-end sound quality in my mixes.
But when listening to some soundtracks from the 50s, it calms me down a little bit, because the tracks back then didn't contain much low-end content at all:
 


It might have to do much more with the recording technology and the consumer audio playback devices of those days.
But even if you listen to the soundtracks of the 50s with today's audio equipment, they still sound really fresh, clean, well-mixed, highly dynamic, very controlled in the low and mid ranges and still pretty complete in the frequency spectrum.

...

In my remix of the Baywatch opening, for example, I really wanted some mighty low-end rumble in the industrial percussion section at the beginning and towards the end of the soundtrack.
Unfortunately, the latest remix version of this track that I uploaded many years ago was still mixed on my old studio monitors (where I couldn't really hear or evaluate what was going on in the low-end and low-mid sections) and without the mixing concept I've developed over the last two years:
 


I've already started working on an improved mix in a way that meets my current standards and mixing skills, where I even use a low-cut filter on the mighty industrial percussion and other instruments in order to clean up the low-end and mids sections in the track.
But it might take a while, because I also want to enhance the piano composition in this track and I'm mainly working on 2 or 3 other music projects at the moment.

At the moment, I might be lucky and be able to continue working on my mixing and composing projects during the week between winter service assignments and some work on the construction sites.

 

On 11/15/2023 at 12:12 PM, Sengin said:

-Definitely try a HPF @ ~90Hz for just the side channel since our ears generally aren't sensitive to lower frequencies on the sides.


Yeah, lower frequencies like from bass and kick drum often sound most effective and powerful in the center of the stereo panorama.

But there are also interesting exceptions to this rule concept.
Just have a listen to the famous track "Stand by Me" by Ben E. King:
 


In this track, you have the bass far on the left side, together with a shaker and triangle as a nice contrast in the higher frequency section.
On the right side of the stereo panorama in this track, you have a lower strings section, a humming choir and a higher strings section with a violin (if I hear it correctly) as a contrast in the higher frequency section again.
But the center of the stereo panorama seems to be mainly reserved for the singer's voice in this case.

Kinda unusual mixing concept with the bass panned to the side - but pretty effective in this track.

 

On 11/15/2023 at 12:12 PM, Sengin said:

-Try HPF'ing your input to your reverb so that lower frequencies aren't 'verbed (probably want a shallower slope for the filter here).  Or at least ducking the wet low frequencies on a new note.


I guess you're talking about working with aux sends for VST plugin effects such as reverb, which allow you to process the plugin effect separately from the main signal (e.g. EQing only the reverb without affecting the main signal of the instrument, synthesizer, voice, etc.) - in contrast to working with direct VST-based plugin inserts, where entire signal chains including the source signal get processed.

The thing is, I'm really used to working with direct plugin inserts or the integrated effects of the VST instrument itself (where I also like the much better, faster and more accurate control of the various settings, based on much more comprehensible parameters and values), because I never really got aux sends to work properly in my DAW.
Whenever I tried to create an aux send and turned it on in a track where I wanted to use that effect, the DSP (the internal digital signal processor of my DAW) would suddenly go over 100% and cause huge instabilities, nasty sound artefacts or even crash-like dropouts.
This was very unusual, because with direct plugin inserts the DSP barely reaches the 50% performance mark even in my biggest, most complex music projects - and I usually work with raw, unbounced/unfrozen MIDI tracks (which need much more DSP/CPU performance, but make it totally uncomplicated to change something in the composition while listening, mixing and editing the track).

Until a few weeks ago, I had never really found out what was causing this issue - and I own a really good computer with an Intel i7-6700 processor, 32 GB of DDR4 RAM, a decent Steinberg UR44 audio interface, the top-notch DAW Samplitude Pro X4 Suite, and more than enough free disk space.

But then I found out that I had messed up one single setting in my DAW - the number of processor cores that should be used to process my DAW tasks.
I had set 8 cores in my DAW because I thought that my i7-6700 really had 8 cores - but it only has 4 cores (which I probably confused with its 8 threads).

After changing the setting to 4 cores, I was finally able to use my first aux sends for separately processed effects plugins with smooth DSP performance and no further issues in my DAW.

I could have also used my Origami convolution reverb from my Independence Pro FX plugin library in Samplitude.
This Origami reverb plugin also includes a 4-band parametric EQ that affects just the reverb - unfortunately, it only comes with a low shelf filter, two band-pass filters and a high shelf filter, without a clear graphic display, instead of providing a nice low-cut filter, several peak filters and a high-cut filter with a clear graphical interface.

It looks like this:

(Attached screenshot: Independence-OrigamiConvolutionReverb.PNG - the Origami convolution reverb with its built-in 4-band EQ)


But with the finally functioning possibility of working with plugin-based aux effects sends, I may be able to enhance the sound quality of my mixing concept even further.
I probably won't use EQed aux sends on the main instruments in the upper frequency range (if the frequency of an instrument including reverb there might clash with the frequency of another instrument including reverb, it could make sense to EQ the whole signal chain directly in order to get a cleaner mix, or - if just the reverb is the problem - drastically reduce the reverb or replace the reverb with some nice ping-pong delay effects).
But for the instruments with the lowest frequencies in the track - like bass and bass-heavy drum elements with stronger reverb - which don't have to compete with other instruments from even lower frequency ranges, it could really be useful to filter out just the long-reverberating low-end reverb clouds (which often sound like dull, undefined mud on ordinary consumer speaker systems) from the mix, while maintaining the power and assertiveness of the main signals of the bass and lower drum elements.



Since I currently work on a new mix (based on my new mixing concept) for my Crisis Core: Final Fantasy 7 remix called "Wings Of Freedom", I could try out a few things and provide you with sound clips from different mixing approaches - especially the old version, the new version (based on my new mixing concept), the new version with an additional master low-cut filter, and the new version with an additional master low-cut filter plus some aux reverb sends with low-cut filter for crucial instruments.

As long as winter doesn't give me its legendary white-out ultra finisher with unexpected masses of snow these days (as I have already mentioned, I also work in winter maintenance during the cold season), I'll upload a few audio samples for you soon. ))


On 12/5/2023 at 9:23 AM, Master Mi said:

I guess you're talking about working with aux sends for VST plugin effects such as reverb

I'm talking about inserts.  Instead of sending your main signal through the reverb, split the signal (e.g. in Reason you'd use the spider audio merger/splitter, but in every DAW it's different) and send only one through to the reverb.  But before it hits the reverb, send it through an EQ to e.g. roll off the lows.  Then merge it (the EQ'd + reverb'd signal) with your main signal.
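
In offline/numpy terms, the routing described here (split the signal, roll off the lows on the copy, reverb only that copy, then merge it back with the untouched dry signal) could be sketched roughly like this. The file names, the 150 Hz cutoff and the send level are placeholder assumptions, and a simple convolution stands in for whatever reverb you actually use.

```python
import numpy as np
from scipy import signal
import soundfile as sf

dry, fs = sf.read("guitar_dry.wav")    # hypothetical mono track
ir, _ = sf.read("hall_impulse.wav")    # hypothetical mono impulse response

# Split: take a copy of the dry signal and roll off its lows before the reverb
sos = signal.butter(2, 150.0, btype="highpass", fs=fs, output="sos")  # shallow slope
to_reverb = signal.sosfilt(sos, dry)

# Reverb only the high-passed copy (convolution reverb as a stand-in)
wet = signal.fftconvolve(to_reverb, ir)[: len(dry)]

# Merge: untouched dry signal + filtered-then-reverbed copy
reverb_level = 0.3  # linear gain for the wet path (assumption)
out = dry + reverb_level * wet
sf.write("guitar_with_clean_reverb.wav", out / np.max(np.abs(out)), fs)
```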


On 12/18/2023 at 11:17 AM, Sengin said:

I'm talking about inserts.  Instead of sending your main signal through the reverb, split the signal (e.g. in Reason you'd use the spider audio merger/splitter, but in every DAW it's different) and send only one through to the reverb.  But before it hits the reverb, send it through an EQ to e.g. roll off the lows.  Then merge it (the EQ'd + reverb'd signal) with your main signal.

If I understand it correctly, it's more or less a change of the order of the plugin insert signal chain.
Usually you use the reverb first on the signal and then the EQ on the signal + reverb - but in your case it's first the EQ that hits the signal, and after this, the EQed signal gets the reverb.

Depending on the order of the plugin inserts, the sound result will be a different one.

...

But it would be really useful if the DAW developers created a system with primary plugin slots (maybe 7 per track) and secondary support plugin slots (maybe 3 per primary plugin slot).
So, you could put a reverb in the primary plugin slot and an EQ in the connected secondary plugin slot that will only affect the plugin effect in the primary slot (and not the source signal itself).

Since I usually treat each instrument/track individually, I'd prefer a system like this over creating several aux send tracks for each instrument track.
But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids.

For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb).

...

Got the 4 audio samples almost done in between work and weekends full of sprawling Christmas preparations.

Just gimme a few more days (I'm already working on the 4th one, where I'm still trying to find out which further instruments besides drums, viola, acoustic guitar, and maybe the rather dry bass in the mix would benefit from EQed aux reverb sends - and especially how much EQ/low-cut filtering on the reverb effect is optimal to clean up the mix without destroying its ambience).

Edited by Master Mi

On 12/19/2023 at 1:49 PM, Master Mi said:

it's more or less a change of the order of the plugin insert signal chain

Not quite - there's no "in place of" here.  You can do this approach (only reverb a HPF'd signal) and then EQ afterwards too (and you'll probably want to, or at least EQ only the wet signal).  The intent of the approach I mention is to reduce the part of the signal that gets reverbed because of how reverb tends to muddy out the lows.  Of course, you are free to EQ afterwards instead, or EQ only the wet signal, or some combination, and the result won't be the same.  There's no one-size-fits-all approach - each will sound better in different situations (different genre, different reverb plugins) - it's up to you to try multiple approaches and decide on which is best for the song in that spot.  The more approaches you have, the more likely you will find the perfect fit.

Another similar approach is to use "de-emphasis EQ" - EQ is (I forget the mathematical term [edit: the term is "linear"]) 'non destructible' and reversible - a 6dB cut at 150Hz then a 6dB boost at 150Hz leaves you exactly where you were.  This means you can e.g. cut, reverb, then boost, and "de-emphasize" the lows that get reverbed.  Same works for e.g. distortion - you can emphasize fun frequencies by boosting, adding distortion, then cutting. 
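
A small sketch of that emphasis/de-emphasis idea: because a plain EQ gain curve is linear, a boost before a nonlinear stage and the exact inverse cut after it restore the overall tonal balance, while the nonlinear stage has reacted to the emphasized version. The FFT-based "EQ" below is only a toy stand-in for a real EQ plugin, and the signal is synthetic.

```python
import numpy as np

def toy_eq(x, fs, f0, gain_db, width=1.5):
    # Bell-shaped gain around f0, applied in the frequency domain (linear, exactly invertible)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bell_db = gain_db * np.exp(-(np.log2((freqs + 1e-9) / f0)) ** 2 / width)
    return np.fft.irfft(spectrum * 10 ** (bell_db / 20.0), n=len(x))

fs = 48000
t = np.arange(fs) / fs
x = 0.3 * np.sin(2 * np.pi * 110 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)  # toy signal

boosted = toy_eq(x, fs, f0=880, gain_db=6.0)          # emphasize the "fun" frequencies
distorted = np.tanh(3.0 * boosted)                    # the nonlinear stage reacts to the boost
result = toy_eq(distorted, fs, f0=880, gain_db=-6.0)  # de-emphasize: undo the boost afterwards

# Sanity check of the "linear and reversible" claim: boost then cut with nothing in between
roundtrip = toy_eq(toy_eq(x, fs, 880, 6.0), fs, 880, -6.0)
print(np.max(np.abs(roundtrip - x)))  # practically zero, apart from floating-point error
```

The reverb version above follows the same pattern: cut the lows, reverb, then boost them back, so only the reverb sees the de-emphasized signal.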

 

On 12/19/2023 at 1:49 PM, Master Mi said:

I will certainly continue to EQ the complete sum signal (source signal + reverb)

Why restrict yourself?  I don't think this is such a black-and-white situation where EQ'ing the input+wet signals identically is always preferred.  I perhaps even find it beneficial to usually assume I will need to EQ the wet signal individually - this lets me keep a clean initial hit, but then ducking out the verb can make space for other tracks (especially in the all-important mids, or to reduce hissing/esses).  Then of course I am free to EQ the input+wet signal together if needed.

Personal preferences and all that, but I find myself usually staying away from aux sends as each track is its own thing and usually needs custom tailoring.  With sends, the only thing you can change on a per-track basis is the volume of the send.

Edited by Sengin

On 12/18/2023 at 5:17 AM, Sengin said:

I'm talking about inserts.  Instead of sending your main signal through the reverb, split the signal (e.g. in Reason you'd use the spider audio merger/splitter, but in every DAW it's different) and send only one through to the reverb.  But before it hits the reverb, send it through an EQ to e.g. roll off the lows.  Then merge it (the EQ'd + reverb'd signal) with your main signal.

In Reason you have the side chain filtering in the mixer as well.  I really don't use this as often as I should; I usually do a parallel track for reverb and EQ the reverb (and then sidechain the main onto the parallel track if necessary), or I just use a reverb VST that already has ducking and EQ built in.


On 12/20/2023 at 4:57 AM, Sengin said:

Why restrict yourself?  I don't think this is such a black-and-white situation where EQ'ing the input+wet signals identically is always preferred.  I perhaps even find it beneficial to usually assume I will need to EQ the wet signal individually - this lets me keep a clean initial hit, but then ducking out the verb can make space for other tracks (especially in the all-important mids, or to reduce hissing/esses).  Then of course I am free to EQ the input+wet signal together if needed.

Personal preferences and all that, but I find myself usually staying away from aux sends as each track is its own thing and usually needs custom tailoring.  With sends, the only thing you can change on a per-track basis is the volume of the send.

Nah, I'll definitely use separately EQed aux reverb sends - but not for every instrument...

I think I will handle it just as I wrote before:

On 12/19/2023 at 10:49 PM, Master Mi said:

But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids.

For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb).

The main reason for EQing the entire sum signal for the higher frequency instruments is firstly the fact that the instruments and sounds in the higher frequency range are often much more competitive (... so you may need to cut more frequencies in general to clean up the track - then why not cut the entire sum signal straight away?), and secondly the fact that reverb in the higher frequency ranges doesn't cause much of a problem for human ears (higher frequency reverb doesn't blur the soundtrack like low frequency reverb does).

...

Besides...

After the last week of work with the crappiest weather conditions (lots of rain, mud and almost a storm) on the building site, many hours of Christmas preparations, a merciless workout, the final cleaning of my cozy palace, a somehow relaxed and interesting Christmas Eve (as a big surprise my uncle visited my mother and talked about his trip to Japan and his experiences with the Japanese culture and the people there) and also a lot of boring small talk (I even took a big break from further family events and could finally enjoy working on my music projects), I managed to finish the 4 audio samples.

Maybe I'll already upload them in the next few hours or tomorrow morning. ))


8 hours ago, Master Mi said:

so you may need to cut more frequencies in general to clean up the track - then why not cut the entire sum signal straight away?

Because of intermodulation (how the amplitude of a frequency determines how much of an effect is applied, and how that frequency's amplitude affects other frequencies).  It is not the same to cut and then reverb as it is to reverb and then cut - that is, reverb is not a linear operation.  If you are being wary of a specific frequency region because it can be crowded, if you cut first the reverb may sound more natural than if you cut afterwards (where it may sound like something is missing or "off" because the reverb was applied to a different signal at this point).

That said, I'm just giving you options.  Doing it one way over another is not always better - it depends on the mix and the sound you are going for.  I'm just letting you know there is a difference in cutting before a reverb and cutting after and why it is different.

 

On 12/24/2023 at 9:58 AM, Xaleph said:

parallel track

Ah yep, you are right - forgot about that way.

Edited by Sengin

10 hours ago, Sengin said:

Because of intermodulation (how the amplitude of a frequency determines how much of an effect is applied, and how that frequency's amplitude affects other frequencies).  It is not the same to cut and then reverb as it is to reverb and then cut - that is, reverb is not a linear operation.  If you are being wary of a specific frequency region because it can be crowded, if you cut first the reverb may sound more natural than if you cut afterwards (where it may sound like something is missing or "off" because the reverb was applied to a different signal at this point).

That said, I'm just giving you options.  Doing it one way over another is not always better - it depends on the mix and the sound you are going for.  I'm just letting you know there is a difference in cutting before a reverb and cutting after and why it is different.

Ah, I guess we were talking about two different things in this case.

You are talking about the direct insert plugin effect order:
Y1) EQ before reverb... will make a different sound result (maybe even cleaner as well - also might save some processing power of the CPU or internal DSP of the DAW) than...
Y2) reverb before EQ....

So I finally get what YOU are talking about - and thanks for the reminder at this point, because (if I remember correctly) I usually took the Y2 route.
This might have to do with my work habits when composing, arranging and mixing, where I often take a suitable instrument, then try to fit it into the ambience of my imagination with reverb, delay, chorus and other stereo/pan/room effects, and often do the fine mixing with the EQ stuff last.
That's probably the main reason for my plug-in insert order with the EQ at the end.


So if the source signal is "A", the EQ is "B" and the reverb is "C", the two ways of signal processing would result in different equations...

I'm neither a math geek nor a signal-chain processing expert, but the equations for the two ways of processing could be roughly like my following attempt (it's probably still not the best and most accurate way to transcribe the signal processing chains into abstract terms - but perhaps it's enough for a rough impression of the different results of the two ways of processing the signal):

Y1 = sound result 1 (EQ before reverb)
Y2 = sound result 2 (reverb before EQ)

A = source signal
B = EQ
C = reverb

Y1 = AB + C*(AB) = AB + ABC = A (B + BC)
Y2 = AC + B*(AC) = AC + ABC = A (C + BC)

Let's take numeric values instead of the variables, something like: A = 2, B = 3, C = 5

Y1 = 2*3 + 5*(2*3)
Y2 = 2*5 + 3*(2*5)

Y1 = 36
Y2 = 40

Different numbers, different sound results on both ways.

"quod erat demonstrandum" :D

(Dude, I really hope I won't radically fool and disgrace myself with the math stuff here - if a math wizard 'n' tech sage reads this, feel free to correct, improve and transcend my light-footed pigeon-level equations.)
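
For anyone who prefers operator notation over the multiplication analogy, the two chains above can also be written (still simplified) by treating the EQ B and the reverb C as operators acting on the signal A, with the dry part summed with the processed part in both cases:

Y1 = B(A) + C(B(A))
Y2 = C(A) + B(C(A))

Even if the fully processed parts happened to coincide (for purely linear processors C(B(A)) and B(C(A)) are the same), the "dry" parts B(A) and C(A) still differ, so the two results are generally different - which is the point of the numeric example above.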


So, this was about the stuff you were talking about.

...

But I was talking about a different thing when I wrote:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

"But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids.
>>> only aux effect reverb sends

For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb).
>>> only direct plugin inserts for the instrument track (I didn't have the plugin order in mind here when writing about processing the sum signal - I could have also written "I will certainly continue to put a reverb plugin insert after the sum signal (source signal + EQ)" instead - my main focus here was just about using the plugins as direct plugin inserts in the instrument tracks.)

The main reason for EQing the entire sum signal for the higher frequency instruments is firstly the fact that the instruments and sounds in the higher frequency range are often much more competitive (... so you may need to cut more frequencies in general to clean up the track - then why not cut the entire sum signal straight away?), and secondly the fact that reverb in the higher frequency ranges doesn't cause much of a problem for human ears (higher frequency reverb doesn't blur the soundtrack like low frequency reverb does)."
>>> In this case, direct plugin inserts for these instrument tracks would save a lot of time compared to creating additional aux effect send tracks for each individual instrument track (unless you want to work with entire instrument groups where you use one aux effect send for the entire group).
-------------------------------------------------------------------------------------------------------------------

...

I don't know how much knowledge and experience you have with aux effect sends.

But when you work with aux effect sends, you really have 2 different tracks there - the instrument track (let's say track number 1) and another aux effect send track (could be track number 40, for example)...
Both tracks (and this is the great feature of working with aux effect sends) can be processed completely differently - different pannings, different inserts etc.
You just need to activate the aux effect send in the aux slots of your instrument track to route the effect to the instrument (otherwise the aux effect send track doesn't "know" to which instrument track it should apply the aux send reverb or other effects) - and of course you need to set the relative level (in dB) of the aux track in relation to the level of the instrument track, directly in the aux slot of the instrument track (once that level ratio between both tracks is set, the routed aux effects will get louder or quieter as the instrument track gets louder or quieter).

And here comes the big one - for the case you want to radically clean up your mix (especially the blurring reverb) without losing the original character and frequency range of your instrument.

Just as an example...
You're composing a complex ambient soundtrack with an acoustic guitar that has a really cozy, warm tone (you definitely want to keep the full frequency spectrum of this particular instrument) - but as soon as you apply reverb to this instrument, it completely messes up the other instruments (bass, drums and a few other instruments that play in the lower frequency range).

So, dry acoustic guitar sounds great in the mix - acoustic guitar with reverb makes the mix messy and muddy.
A problem which could be solved with the magic of aux effect sends.

Remember?
Two completely different tracks - the instrument track (which we will leave as it is - without any plug-in inserts in this case) and the other aux effect send track (into which we will insert the reverb and EQ only this reverb).

We want to keep the full frequency range, warm tone and clean sound of the guitar in the mix - so we don't use any EQ plugin insert or reverb insert on this instrument track.
Yep - sounds warm and clean, but still dry as hell.

So we still need some reverb (but a radically cleaned up reverb without the problematic low-frequency reverberation) for the ambience.
And of course we'll only put the reverb plugin in the plugin slots of the separate aux effect send track, because if we EQ the reverb in the separate aux effect send track, it won't affect the source signal of the instrument (two separate signal chains - one for the instrument track, one for the aux effect send track).

It is not of primary importance here whether the reverb sits before the EQ in the signal chain of the aux effect send track or the EQ before the reverb.
The important and really helpful feature here is that you can EQ only the reverb for the instrument without EQing/touching or changing the instrument itself.

So...
If you do it wisely, you can get an instrument with its full frequency range and original sound character together with a decently low-cut-filtered reverb by using aux effect sends.

In our case, we can have a nice, warm and cozy acoustic guitar with an untouched frequency range in combination with an ambient but clean guitar reverb, where just the low frequencies of the separately processed guitar reverb have been heavily low-cut-filtered.
And as a result, the guitar reverb shines much brighter and won't mess up the mix anymore.
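
As a numeric sketch of exactly that aux-send routing - dry guitar track left untouched, a separate reverb bus fed from a send level set in dB, and a low-cut filter that only ever touches the wet bus - something like the following. The file names, the -12 dB send level and the 200 Hz cutoff are placeholder assumptions.

```python
import numpy as np
from scipy import signal
import soundfile as sf

guitar, fs = sf.read("acoustic_guitar_dry.wav")  # instrument track, stays untouched
ir, _ = sf.read("hall_impulse.wav")              # impulse response for the reverb bus

# Aux send: the instrument track feeds the reverb bus at a set level (in dB)
send_db = -12.0
send = guitar * 10 ** (send_db / 20.0)

# Reverb bus: convolution reverb, then a low-cut that only affects the wet signal
wet = signal.fftconvolve(send, ir)[: len(guitar)]
sos = signal.butter(4, 200.0, btype="highpass", fs=fs, output="sos")
wet = signal.sosfilt(sos, wet)

# Mix bus: untouched guitar + cleaned-up reverb return
out = guitar + wet
sf.write("guitar_plus_clean_reverb.wav", out / np.max(np.abs(out)), fs)
```

Because the low-cut sits on the bus and not on the instrument track, the guitar keeps its full frequency range while only its reverb tail loses the muddy low end.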

...

I hope that I was able to make it a little clearer what I was referring to in my previous comments.

Edited by Master Mi


PS: Since the topic became more extensive and more useful than expected, I changed the title from "Steep-edged low-cut filter on the master track as a general solution for solving low-end clutter issues in the mix?" to "Cleaning up the low-end and low-mid sections in a mix - with single track EQ, master track EQ, EQed aux effect sends and other methods".

...

And since the upload feature below the comment field is working again (big thanks to DarkeSword at this point for restoring the upload function), I can provide some audio samples showing different mixing approaches or possible stages of mixing in my next comment.


Some first steps of improving the mixing quality - using EQ filters and using aux effect/reverb sends instead of direct effect/reverb plugin inserts
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

So, I can finally start with the promised audio samples of my Crisis Core: Final Fantasy 7 remix in 4 different mixing approaches or mixing stages (only excerpts, but ones in which you can hear some crucial parts regarding the mixing quality - the final remix might still get some changes in the composition)...

Here they go:


1) The old version
-----------------------

 

 

This version is about 6 to 7 years old and was mixed on my former studio monitors (Presonus Eris E3.5 studio reference speakers in combination with my still existing Fostex PM-SUBmini 2 subwoofer) and on my Beyerdynamic DT880 Pro studio headphones connected to my Steinberg UR44 audio interface.

I didn't have the biggest mixing experience back then and didn't have as fine an ear as I do now (which may have something to do with the fact that my studio monitors at the time couldn't show me the necessary details and the critical things in the mix - they had a very pleasant and powerful sound, but they rather glossed over my tracks and made them sound kind of finished at a still unfinished stage).

I didn't even use dedicated EQ VST plugins back then (just a couple of 3-way vintage EQs and timbre controls in the VSTi and synth interfaces) because I thought that EQing (especially if you overdo it) turns the natural sound of an acoustic instrument into something very artificial and that the more EQ you use, the more your hearing constantly adapts to the new timbre until you can't tell when the bass is getting too heavy and trebles are getting too shrill - and so on.

I was thinking more about how it is still possible for so many different instruments to play together with the whole room reverb in large concert halls without the overall listening impression sounding muddy (I just wanted to understand this concept on a fundamental level and try to implement it in my DAW).

I rather suspected that it might be due to the well-placed positioning of the instruments or the size and spaciousness of the concert hall for a more relaxed expansion of the audio waves or acoustic signals, or that it might also be related to certain differences between real acoustic, analog sound signals and digital sound signals.

But back then I didn't even get to the rather obvious thought that great concert halls often receive extensive acoustic treatment at a highly professional level.
And if you talk about large bass traps, acoustic panels for the walls, floor and ceiling, or even about special materials for the seating, you could also say that these acoustic tools work in a similar way to a low-cut filter in an EQ VST plugin, for example (only with the big difference that with a good EQ plugin you can react much better, much faster and without huge expense to changes in the soundscape).

...

But despite my hesitant moves at mixing, I still had the basis of a mixing concept back then - a concept which should provide at least some great possibilities for dynamics, untouched/undestroyed signal peaks for better sound quality and a natural sound of the instruments, and a concept for sophisticated loudness regulation of soundtracks.

I'm talking about mixing to the EBU R 128 standard (a recommendation of the European Broadcasting Union) developed by really farsighted audio engineers who wanted to bring back dynamics, sound quality and sensible loudness regulation to audio and audiovisual content in broadcasting (no more annoying loudness jumps between different soundtracks and other audio programs), as a reaction to the ongoing phenomenon called the "loudness war", mainly caused by the growing use of compressors in the ad and music industry to sound louder than the competition.

With this, every single soundtrack and audio program gets mastered to a target loudness of -23 LUFS (+/- 1 LU) relative to full scale (for reference: 0 dBFS is the level no sound signal can exceed and where a signal turns into clipping).
Loudness is something like the perceived sound pressure level measured over a certain amount of time.
So in order to get a mix towards the target loudness of -23 LUFS, you need a loudness meter and you have to measure your soundtrack from the very beginning to the very end (because the target is an integrated average - your soundtrack might start at -33 LUFS or sit at -20 LUFS in the middle of the track; if it comes out at -23 LUFS over the whole track, it's fine).

Of course there are also limits on the maximum dynamics in EBU R 128 mixing (so an untamed gunshot after a soft piano melody won't blast your ears after mixing to EBU standards), defined in terms like "maximum short-term loudness level" (should not exceed -18 LUFS), "maximum momentary loudness level" (turns red in my EBU-adjusted loudness meter as soon as it reaches -15 LUFS) or "maximum true peak level" (should not exceed -1 dBTP), but I usually just keep a fleeting eye on these parameters (because I mainly create music and not heavily dynamic cinematic special effects or stuff like that).

Many modern soundtracks are already mixed to target levels around -15 to -12 LUFS or so, leaving no big headroom for the signal peaks above (guess that was the time when sound surgery like peak compression or brickwall limiting started, and when the dynamics of soundtracks declined more and more).
So, soundtracks and audio programs mastered to EBU standards are around 50% to 60% as loud as a lot of modern music - a similar loudness to original mixes from the 80s.

The really cool thing is that you don't have to worry about the signal peaks when mixing to EBU R 128 loudness standards, because you always have more than enough headroom.
Even in the master track the signal peaks will barely scratch the - 5 dB mark (and I don't use compressors or limiters in my soundtracks).
Saves a lot of time at mixing/mastering and provides a good uncompressed signal.
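
If you want to double-check the -23 LUFS target outside the DAW, the pyloudnorm library implements the ITU-R BS.1770 measurement that EBU R 128 is based on. A minimal sketch (the file name is a placeholder):

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_mix.wav")          # hypothetical stereo or mono file

meter = pyln.Meter(rate)                    # BS.1770 meter (K-weighting + gating)
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Optionally pull the file to the EBU R 128 target of -23 LUFS
normalized = pyln.normalize.loudness(data, loudness, -23.0)
sf.write("my_mix_r128.wav", normalized, rate)
```

The short-term and momentary maxima and the true-peak ceiling mentioned above are separate measurements, so a dedicated EBU-mode meter in the DAW remains the more practical tool while actually mixing.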

...

But enough small talk about the early foundations of my mixing concept.
Let's go to the next audio sample for showing the next stage of mixing I'm currently working on.

...


2) The new version with the new mixing concept I've developed over the last few years
--------------------------------------------------------------------------------------------------

 

 

This is the new version I'm currently working on.

It's based on my newly developed mixing concept, which I already used for my Goldfinger remix called "Safe 'N' Sane Skater Heaven Superman", and it is mixed on my new professional Yamaha MSP3 studio monitors (Yamaha MSP series is the professional product line of the Yamaha HS studio monitor series, so it comes with an even better audio definition and a flatter, more natural and relaxed sound) in combination with my Fostex PM-SUBmini 2 subwoofer as well as with my Beyerdynamic DT 880 Pro studio headphones (finally with the silver ear cups giving me a much more neutral and natural listening experience) connected to my new Lake People G109-P headphone amp (which can finally drive these high-impedance headphones with ease).

Because of the really outstanding audio resolution and truthfulness of the Yamaha MSP3 studio monitors (they also have no annoying hissing or humming noises from the tweeters or woofers, so they are perfect as a reliable near-field studio monitor solution), I can hear many more details in my tracks and I can play much more with the reverb without being afraid of messing up the mix.

In addition to the EBU loudness standards as the early foundation of my mixing concept for better dynamics, peak signals and sound quality, I also used EQs with low-cut filters on the individual tracks of this mix for the first time.

Another big core of my new mixing concept is the use of a really helpful 2-channel surround feature of my DAW, which encodes surround information into a stereo signal and lets you place instruments and other sound signals in a graphical interface as you wish.
Whether shifting a signal more to one side or to the front or back, sending a signal only to the sides while leaving out the center completely, or reducing the stereo width by dragging the two stereo objects closer together and towards the center - no problem within a short amount of time with this useful tool.
So, it's not only good for creating a great impression of depth and spaciousness in your mix - it's also good for separating the competing frequencies of your instruments by placing them differently, and for radically cleaning up the mix in this way.

If you want to read more or even get a little audiovisual impression of this feature, I'd recommend the thread "Creating a realistic impression of depth in stereo mixes" (especially under my comment with the title "Visual tools for creating a realistic impression of depth in stereo mixes") in the Music Composition & Production section of this forum.

...

I guess this was a nice and really succinct summary of the main elements of my new mixing concept.

...


3) The new version + master track EQ with low-cut filter
---------------------------------------------------------------------

 

 

This version is almost the same as the second audio sample, but this time I also used a steep-edged low-cut filter on the master track (36 dB per octave - it starts gently around 50 Hz, attenuates about 10 dB at 20 Hz, and already about 20 dB at 15 Hz).

You may hear a small difference between audio samples 2) and 3) - but it's nothing like the far bigger jump from 1) to 2).


I'm not sure if I will use a master track low-cut filter as a general device in the future of my mixing concept - still afraid of losing some crucial audio information between 30 and 50 Hz (especially the kinda earthy power of the bass and kick drum - not the reverb).

...


4) The new version + master track EQ with low-cut filter + aux reverb sends with low-cut filters for specific instruments
---------------------------------------------------------------------------------------------------------------------------------------------------

 

 

This final audio sample is based on the version for the previous audio sample.

But it has one big difference.
Instead of using EQ and reverb plugins as inserts for all instruments, I picked the 4 instruments with the most critical (lower) frequency ranges, especially with regard to possible reverb mud, and used EQed aux reverb sends with decent low-cut filters on the reverb effects for these instruments.

The 4 instruments on which I used aux reverb sends are:

- the drums (whole drum kit)
- the electric bass
- the viola
- and the acoustic guitar playing the chords

I think that this made a bigger perceivable difference and cleaned up the bottom of the mix with the low-end and low-mid reverberation really well.

...

So, now I'm really curious about your opinions regarding the different mixing approaches (or the possible stages of mixing) and the sound quality.
And I would like to know which mixing results you like best or which might be the most promising ones.

 

Edited by Master Mi


How do you pan aux reverb sends in the context of the panning of the instrument or main signal source?
-----------------------------------------------------------------------------------------------------------------------------------

After finally getting aux effect sends working in my DAW some time ago, I've increasingly integrated them into my mixing approach due to the improved clarity, sound quality and enhanced sound design possibilities they can bring to a mix.
But there's one thing I'm still not quite sure about - how to arrange them in the stereo panorama (especially in the context of the panorama of the instrument or source signal) in the best possible way.


I've had various thoughts on this and am currently drawing on different approaches, for example:

A) Panning the aux reverb send just like instrument/source signal (so that the relationship between the instrument/source signal and its reverb send is not torn apart too much - could be useful if too much is already happening in the mix outside the panning of the instrument/source signal, but also disadvantageous if there is already too high a density of musical events in the area of the panning of the instrument/source signal)

B) Bringing the aux reverb sends away from the center of the panorama to the sides (might increase the clarity of the track a lot - since I rarely pan instruments fully to one side due to the loss of spatial information of instruments panned like this, it could be useful to rather pan reverb effects more or even fully to the sides)

C) Panning the aux reverb send a little bit more to the side (on the same side as the source signal - e.g: source signal is panned + 3dB to the right, then the reverb send could be panned + 10 dB to the right - can be useful if the instrument in its panorama area needs a little more punch/less reverb and there is still some room for the reverb send in the panorama a little bit further to the right)

D) Panning the aux reverb send fully to the side (on the same side as the source signal - amplifies the effect of variant B somewhat, but the spatial relationship between the source signal and its reverb send is lost to a greater extent)

E) Panning the aux reverb send fully to the opposite side (the far extreme on the opposite side of the source signal, for example, if the instrument is panned + 3dB on the right side, the aux reverb send of the corresponding instrument is panned hard left - this does not seem to disturb the spatial relationship between source signal and reverb to such an extreme extent, while at the same time, the assertiveness of the instrument/source signal drastically increases in the mix)

F) Panning the aux reverb sends just where you've got lots of free space, however it fits your needs as a sound creator and however it sounds best (guess this sounds like some kind of textbook answer)


So, how do you handle the panning of aux reverb sends?
Do you have a general solution for this or do you rather use an individual approach depending on the instrument/signal source, music genre or the specific intention of sound design?




For showing some practical stuff let's go to some further audio samples of my Crisis Core remix I'm currently working on.

During the last weeks I could make some huge progress with this track - not just with the mixing, but also by composing lots of new stuff after playing, recording some further melodies for old and new instruments via MIDI keyboard and finally editing the content with the MIDI editor of my DAW.

Although the audio samples are still a few steps behind the actual mixing and compositional state of my remix, the mixing of most instruments was already done at this point.
But I still had an issue with the electric guitar, where I wasn't sure how to pan the guitar reverb send the best possible way.


Just to give you a fundamental idea of the mixed instruments which appear in the following audio samples without any further changes (if I remember correctly):

- electric bass (plays almost fully in the center with a slight stereo width of around 2% in order to make the bass sound a bit broader and with a less stiff spatial impression)
>>> aux reverb send of the electric bass (a minimal scoring stage convolution reverb with low-cut filter) is panned like the instrument itself (still not sure if the mix sounds better when bringing the subtle bass reverb more to the sides)

- acoustic drums (play a bit more in the background between center and sides - stereo width should be around 50 % - source signal also has a reverb insert with a subtle EQed concert hall convolution reverb, which also makes the kick drum more powerful)
>>> aux reverb send of the acoustic drums (a subtle cathedral convolution reverb with a heavy low-cut filter to add some airy vibes to the drum kit) is panned to the sides, leaving out the center (you will hear much more of this great effect shortly after the intro of my remix - it's not in the following audio samples)

- viola (panned around + 3 dB to the left side, source signal contains delay effect and also a smaller low-cut filter)
>>> aux reverb send of the viola (a subtle cathedral convolution reverb with a smaller low-cut filter) is panned like the instrument itself

- acoustic guitar chords (fully panned to the sides, leaving out the center, a smaller treble boost from a vintage EQ is used on them)
>>> aux reverb send of the acoustic guitar chords (a delayed hall reverb with moderate low-cut filter) is panned in a similar way like the instrument

- trumpets (one of the new sections I've composed for this remix, panned around +7 dB to the left side, source signal contains a heavy low-cut filter and a subtle, already low-shelved cathedral convolution reverb)
>>> no aux reverb send is used for this instrument

...

And now comes the critical part with the raw, clean electric guitar, where I'm still looking for the best mixing solution.
In all audio samples the electric guitar source signal (panned around + 3 dB to the right side) goes through my guitar amp plugin Vandal (with a smaller overdrive stomp box, a special Alnico cabinet simulation and a stronger ping-pong delay effect) and a moderate low-cut-filter.
The aux reverb sends used for the electric guitar in the samples after audio sample 1 contain a subtle cathedral convolution reverb with a heavy low-cut filter.

The differences in the electric guitar section are shown in the following audio examples with different mixing approaches for this instrument:


1) No aux reverb send - guitar reverb comes via direct plugin slot insert from guitar amp plugin
-----------------------------------------------------------------------------------------------------------------------

 

 

...


2) Aux guitar reverb send panned like the guitar
------------------------------------------------------------

 

 




3) Aux guitar reverb send panned about 7 dB more to the right side than the guitar
--------------------------------------------------------------------------------------------------------

 

 

....


4) Aux guitar reverb send fully panned to the right
---------------------------------------------------------------

 

 




5) Aux guitar reverb send fully panned to the left (the far extreme on the opposite side of the source signal)
---------------------------------------------------------------------------------------------------------------------------------------

 

 




Since I didn't like the sound results I got with a standard method for mixing electric guitars (like panning the source signal fully to one side and panning the reverb send fully to the opposite side - as seems to have been done with the electric guitars at some points in this really awesome Maniac Mansion remix composition: https://www.youtube.com/watch?v=v-6Le36mlDA - but somehow I can't stand the sound of mixing approaches where instruments, especially lead instruments, are put fully to the sides), I almost think the mixing approach from audio sample 5 works best in this case (especially at the point where the trumpets kick in).



But let me know your opinion about this topic and my different mixing approaches.



Besides, I just thought it couldn't hurt to additionally upload the...


6) Latest update of the remix section showed in the previous audio samples
-----------------------------------------------------------------------------------------------

 



In this version (which is based on audio sample 5) I slightly enhanced the trebles and brilliance of the trumpets and the electric guitar (so they can shine a bit more in the mix and are also brought more to the front as the lead instruments of this part), and I spiced up the drums section with a few variations.

 

 

Edited by Master Mi


A few major further steps to improve the clarity and spatiality of the sound by cleaning up the center area of a stereo mix from less relevant or counterproductive audio information
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I haven't been able to continue working on the composition of my Crisis Core remix for the last few months due to a whole lot of work, both in my private and professional life, I was able to achieve a real breakthrough in my mixing concept shortly after my last post in this thread.

The result was a kind of improved LCR mixing method (LCR for left-center-right), which not only brings more clarity, spaciousness and width to the mix, but also allows much finer panning for the spaces in between (instead of just plain L, C or R), with considerably fewer steps compared to doubling tracks to process a hard-left and a hard-right side of a single instrument or sound source - and with the aim of a mixing result that is also completely convincing on a normal stereo system (and not only on studio monitors and/or studio headphones), ...

... which was an important core concern of mine in this thread, as I had previously been able to perceive audible qualitative differences in comparison to really professional mixes and I didn't know why this was the case or why it was so clearly apparent on a normal stereo system.

I've also put in some audio samples around the end of this comment to show the further improvements in the sound quality of the mix.



I would almost call this audible problem, which I've experienced especially on standard stereo systems, the "overloaded center effect".

The center in the mix (please correct me if I'm not quite right in my definition) is probably something like the crosstalk between the left and right sides of a stereo system - the mid signal is usually defined as the sum of the left and right channels divided by 2, i.e. (L + R) / 2.
If you remove a large part of this crosstalk in the most sensible way possible (i.e. you remove as much irrelevant or counterproductive sound information as possible from the center area - especially concerning the reverb), you can achieve considerably more clarity in the mix and a better assertiveness of the individual instruments.

...

I guess in professional terms this is called "mid-side processing": you remove certain audio information like instruments, synths and effects from the center and bring it to the sides.
A traditional way to do this is with hard-left or hard-right panned signals and a counter signal on the other side (for a realistic stereo image without involvement of the center area), as in the LCR mixing method.
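
For anyone who likes to tinker, here's a tiny sketch of the math behind this in Python with NumPy (my own illustration of the general mid/side idea, not how any specific DAW does it internally):

import numpy as np

def stereo_to_mid_side(left, right):
    """Encode a stereo pair into mid ((L + R) / 2) and side ((L - R) / 2) signals."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def mid_side_to_stereo(mid, side):
    """Decode mid/side back into left/right: L = mid + side, R = mid - side."""
    return mid + side, mid - side

# Tiny demo with a made-up stereo signal
rng = np.random.default_rng(0)
left = rng.standard_normal(8)
right = rng.standard_normal(8)
mid, side = stereo_to_mid_side(left, right)
left_back, right_back = mid_side_to_stereo(mid, side)
assert np.allclose(left, left_back) and np.allclose(right, right_back)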

As soon as you use a softer panning (let's say, the pan knob is only at -10 dB on the right side), you also get crosstalk between the sides and the center - which isn't inherently bad in mixing (it can also provide some interesting spatial information), but you should not overdo it if you want a clean mix.
If just a few sound signals like drums and vocals/lead instruments fill the gap between center and sides, while the bass plays only in the center and all the other instruments are fully panned to the sides, it might be really beneficial for the mix.

Of course you can also mix the bass a bit more from the center towards the sides and put the drums fully in the center and/or on the sides, depending on the music genre and your personal taste. But always keep in mind that you should not produce too much mid-side crosstalk, especially not with many instruments or other sound signals in the same frequency range, because it can cloud and clog up your mix much faster (especially in soundtracks with lots of instruments and various audio information - in comparison, this stuff won't matter too much for a solo piano part in terms of mixing quality).



In my DAW Samplitude Pro X4 Suite I use the integrated 2-channel surround feature (which simply encodes spatial information into a standard stereo signal via a visual interface) not only for getting some more depth into the mix, but also for regulating the center/side ratio of the instruments, effects and all the other audio information.

It looks like this in my DAW:

HarpAuxReverbSendPanning(BothWithoutInvolvementOfTheCenterArea).thumb.PNG.5cbfb1132167c5d614c313333089d3d4.PNG

In this screenshot you can see the spatial settings and measurements of a harp playing together with its very own aux reverb send.
In the right part you can see my setting of the instrument within the 2-channel surround panner (just the left and right channels are involved, the center is completely left out); the aux reverb send of this instrument has a similar panning.

And in the lower left part with the vectorscope, you can see once again that the harp with its aux reverb send is out of the center and widely panned.
Just note that this metering device is loudness-based or volume-based - this means that the louder the measured signal gets, the bigger the expanse of the graph will be.
So, if you want to check how far the signal is panned to the sides or how much of it is in the center, you have to examine the ratio between width and height of the graph (big height and low width means the signal is more in the center, while equal height and width means the signal is on the sides - and a bigger width than height means you are probably cheating with a stereo enhancer - but don't worry, tools like these don't sound good 'n' natural either).
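
If you don't have a vectorscope at hand, you can get a crude numerical estimate of the same thing by comparing the RMS levels of the mid and side signals - just my own rough approximation in Python, definitely not how Samplitude's meter actually works:

import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def width_report(left, right):
    """Rough center-vs-sides estimate: a high mid/side ratio means the signal sits mostly in the center."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    mid_rms, side_rms = rms(mid), rms(side)
    ratio = mid_rms / (side_rms + 1e-12)  # avoid division by zero
    return mid_rms, side_rms, ratio

tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(width_report(tone, np.zeros_like(tone)))  # hard-panned to one side: ratio around 1
print(width_report(tone, tone))                 # dual mono: side is 0, signal fully in the center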



Here's another screenshot which shows the panning of the new trumpets (including the measurements of the aux reverb send of the trumpets, which is panned exactly like the trumpets) in my Crisis Core remix:

TrumpetsAuxReverbSendPanning(BothWithoutInvolvementOfTheCenterArea).thumb.PNG.ad19e429deb96ff252f9187967035b61.PNG

As you can see, the center is again not involved (same thing goes for the aux reverb send, of course), and I've lowered the volume on the right channel by 7 dB.
This means that with the help of the 2-channel surround editor I can set up a pretty fine stereo panorama without any center involvement and in only one track (plus one more track for the aux reverb send, of course). With the conventional LCR mixing method I might have needed one track for the left side and one track for the right side (with different volumes and/or delay effects for each side) to create a similar stereo panorama - plus one or even two more tracks for the likewise hard-panned aux reverb sends.



If you don't have something like a 2-channel surround editor in your DAW, just stick with the mentioned LCR mixing method to clean up (especially the center of) your mix.

There might also be another solution for easily reducing the center volume of a track in your DAW.
Your DAW might have some kind of stereo editor for each track (in my DAW, I get into the stereo editor by right-clicking the virtual pan knob in the track editor on the very left side - after this, the window with the stereo editor for this track opens up, and there I can reduce the center volume under "Kanalabsenkung Mitte (dB)" (center reduction), but only by 6.02 dB at the maximum - that oddly precise value is probably no coincidence, since 6.02 dB corresponds to an amplitude factor of exactly 2, i.e. halving the center signal):

ReducingCenterVolumeWithStereoEditor.thumb.PNG.4828bc3c8f5edb6dc670eb5fce58de45.PNG
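My guess (and it's really just a guess) about what such a center reduction does internally: attenuate the mid component by the given amount and rebuild the left and right channels. As a small Python sketch:

import numpy as np

def reduce_center(left, right, reduction_db):
    """Attenuate the mid ((L + R) / 2) component by reduction_db and rebuild left/right.
    At 6.02 dB the mid signal is exactly halved in amplitude."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    gain = 10.0 ** (-reduction_db / 20.0)  # dB -> linear amplitude factor
    mid *= gain
    return mid + side, mid - side

# Example: reduce the center of a stereo buffer by the maximum 6.02 dB
rng = np.random.default_rng(1)
left = rng.standard_normal(1000)
right = rng.standard_normal(1000)
new_left, new_right = reduce_center(left, right, 6.02)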

...

In the next screenshot, you can see how I panned the electric bass in this track (the measurements only show the electric bass with deactivated aux reverb send; the aux reverb send is fully panned to the sides, but at the same depth level as the instrument):

ElectricBassPanningWithoutAuxReverbSend(BassAlmostCompletelyIntheCenter-DeactivatedAuxReverbIsPannedFullyToTheSides).thumb.PNG.cf5827d0c6c60d0c613577361d239baa.PNG

Here you can see that the electric bass plays almost fully in the center without the sides ("almost", because I like it much more if the bass hops around the center a bit without being too stiff a mono center thing - so I open the bass up just a tiny bit towards the sides, but only towards the lower surround-information-encoding stereo channels, which can be used to put the signal source more towards the back of the mix).

The vectorscope and the direction meter next to it show once again how the electric bass with the deactivated aux reverb send behaves in the stereo panorama.



And in this additional screenshot, you can see how the whole track (in the final showdown part with lots of different instruments playing together) behaves in the stereo panorama:

Vectorscope-ComparisonWholeTrack.thumb.PNG.903dfae3e9b44eb08378432647728b48.PNG
 

By playing and measuring the whole soundtrack, the vectorscope shows a really wide panning (and this already at my preferred EBU R 128 target loudness of only around -23 LUFS - with a more common mastering loudness of around -15 LUFS the metering graph would even shoot over the edge of the vectorscope and over the "L" and "R" letters) with a very clean center. In the center mainly the electric bass plays, and only a few other instrument tracks like drums, power chords and some sort of leading piano arpeggios are panned between the center and the sides, affecting both with a bit of crosstalk and giving the mix a bit more spatial feel - something that could instantly be ruined if you put too much crosstalking audio information in the center or between the center and the sides.
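
As a little side note: if you want to check the integrated loudness of an exported mix outside the DAW, the pyloudnorm package can measure it according to ITU-R BS.1770 / EBU R 128. A small sketch (the file name "mixdown.wav" is just a placeholder, and you need the soundfile and pyloudnorm packages installed):

import soundfile as sf        # reads WAV files as float arrays
import pyloudnorm as pyln     # BS.1770 / EBU R 128 loudness meter

data, rate = sf.read("mixdown.wav")          # shape: (samples, channels)
meter = pyln.Meter(rate)                     # K-weighted meter with default settings
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS (EBU R 128 target: -23 LUFS)")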

All other instruments in the track are panned out of the center fully to the sides, but they still have their unique stereo pannings between the left and right side.

The aux reverb sends for all instruments, on the other hand, are panned completely out of the center to the sides without any exceptions, which cleaned up the whole soundtrack a lot.
Either the aux reverb sends are panned the same (or a similar) way as the instruments (same depth, similar left/right behaviour in the stereo panorama - but always without involvement of the center area, even for the aux reverb sends of the few instruments which affect the center).
Or, for example, if an electric guitar is panned with a volume of -5 dB on the right side (so that the instrument is 5 dB louder on the left side), I've panned the aux reverb send for this electric guitar hard to the right side.



So, just a final summary on the mentioned (and some additional) changes I made in the mix to radically clean it up and to improve it further:
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

A) I got all the aux reverb effect sends out of the center fully to the sides (maybe one of the most important things I did in this stage of mixing for improving the clarity).
No matter how you do it - whether by hard pannings like in the LCR mixing method or with visual tools like a 2-channel surround editor, where you can reduce the volume of the center area - just do it.

B) I also got most of the instruments (except the bass and the few instruments that are panned between the center and the sides) out of the center area.

C) I split the track with the acoustic drums into several tracks for the individual drum elements (bass drums, snare drums, tom drums, cymbals) - just to put different EQs on the different drum elements, to improve the clarity within the drums section and between the drums and the other instruments, and to additionally clean up the area between the center and the sides.

D) I reduced a few annoying or slightly clashing frequencies from certain instruments with peak filters.

For example, I made a little peak cut for the bass drums at around 200 Hz by 5 dB:

EQBassDrums.PNG.70c1bed90d82585c6ea52abf294fb15e.PNG

Nothing really earth-shattering - but it transforms the kick drum from a former "bop, bop" a bit more into a smacking "bip, bip" and makes the kick drum a bit more assertive against the electric bass in the mix.
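
For the curious, a cut like this can be imitated outside the DAW with a standard peaking-EQ biquad (coefficients from the well-known Audio EQ Cookbook) - this is only a sketch of the general idea, not the exact filter my EQ plugin uses:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad peaking EQ coefficients (Audio EQ Cookbook). Negative gain_db = cut."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    a_coef = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / a_coef[0], a_coef / a_coef[0]

# Example: cut roughly 5 dB around 200 Hz on a mono kick drum buffer at 48 kHz
fs = 48000
kick = np.random.default_rng(2).standard_normal(fs)  # placeholder for real audio
b, a = peaking_eq(fs, f0=200.0, gain_db=-5.0, q=1.2)
kick_eq = lfilter(b, a, kick)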

I also made some peak cuts on the acoustic guitar playing the rhythmic chords:

EQAcousticGuitarChords.PNG.21a6d34c3d64d9e7e7ac7539a42d4869.PNG

It makes the guitar sound less blocky, much lighter and more stylish, letting a few other instruments "breathe" more in the mix.

E) I removed some overloaded, gimmicky and unnecessary effect plugin chains from my mix.
For example, in the previous mixing version I used two different reverb plugins on the acoustic drum kit - one as a preceding direct insert and another one as a low-cut-filtered aux reverb send - just to let the kick and snare drums sound mightier.
I completely removed the direct insert and only adjusted the low-cut-filtered aux reverb send for all drum elements in a way where the kick 'n' snare drums still sound powerfully reverberating - but way cleaner, without unnecessary low-end reverb mud throughout the mix.
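
A low-cut on a reverb send is basically just a high-pass filter in front of (or after) the reverb. Purely as an illustration of the concept (the 150 Hz cutoff is an arbitrary example value, not necessarily what I used), a Butterworth high-pass in Python with SciPy:

import numpy as np
from scipy.signal import butter, sosfilt

def low_cut(signal, fs, cutoff_hz, order=4):
    """Butterworth high-pass ("low-cut") filter, e.g. to keep low-end mud out of a reverb send."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

# Example: filter a mono send signal at 48 kHz with a 150 Hz low-cut before it hits the reverb
fs = 48000
send = np.random.default_rng(3).standard_normal(fs)  # placeholder for the real send signal
send_filtered = low_cut(send, fs, cutoff_hz=150.0)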

F) True to my previous motto "It's all in the mix" or "Complicated masterings are just missed opportunities in mixing", I removed the master EQ plugin again.

I somehow had the feeling that it was sucking a bit of the punch out of the mix in the low-end range because it was taking the range and power out of all the instruments in the lower frequency range - unfortunately also those that are supposed to play right there.

So, I went back to single-track EQing and made a few more adjustments to the relevant tracks.



And lastly, I want to show ya the final audio samples, where you can finally listen to the mixing progress I've made with all these steps and compare the last version of this audio sample (the results before the mixing approaches mentioned in this post) with the new version (the results after the newest mixing approaches)...


6) Latest update of the remix section shown in the previous audio samples (former version)
--------------------------------------------------------------------------------------------------------------------

 



(Somehow, the former audio sample doesn't seem to work in the new posting - gotta find somebody who will fix it, maybe the almighty IT janitor @DarkeSword. But not today - until stuff gets fixed, just listen to audio sample number 6 in my previous post.)
...



7) New mixing update of the remix section after radically cleaning up the center area (new version)
---------------------------------------------------------------------------------------------------------------------------

 



...

Feel free to join the discussion and tell me how you like the new approaches in my mixing concept. ))


For the next big update, I want to compose the last few things I still have in my vision for this remix and maybe even deliver a finished new remix version of the whole soundtrack for a much bigger comparison.


Cleaning up the mix panorama-wise (and creating special atmospheres with narrow pannings, hard/wide pannings and the pannings in between)
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Good news.

Over the last few months, I've made further progress with the mixing and composition for my upcoming remix "Wings Of Freedom" for the game Crisis Core: Final Fantasy 7, which I'll explain in detail below, along with my thoughts on various approaches to mixing and some corresponding audio examples.

But for now, here's a rough summary of the things I added or changed after the last audio excerpt:


A) The electric bass, to which I had previously given a minimal amount of latitude from the center towards the sides for a bit more stereo width, is now positioned uncompromisingly in the center without any further stereo width.

This has led to a significant improvement in clarity in the lower frequencies as well as to an even better separation between bass and drums (especially important in the later part of the remix, where some additional instruments enter the lower frequency range).

Contrary to my fears, this did not make the bass sound too stiff and lifeless, probably because the aux reverb send for the bass, which I've panned hard to the sides as usual, still causes a certain amount of movement in the stereo image.

Since the source signal of the bass and its reverb blend together in terms of sound, the bass reverb also moves the fully centered bass a little in the stereo image - all while keeping a much cleaner low-frequency area.


B) I made a few bolder EQ low-cut filter decisions on some instruments and their reverbs, for example on the viola and the acoustic guitar chords, which now sound a little bit brighter and leave more room for the instruments with lower frequency ranges, resulting in a slightly cleaner mix.


C) I made some changes to the reverb settings of some instruments, for example the acoustic guitar (the one that plays the chords), where I reduced the amount of reverb in the aux reverb send a little bit.

The mix sounds cleaner, more grounded and more powerful now, especially when some other instruments with stronger reverbs (like a harp, brass or electric guitars) kick in.
It also created a different impression of depth that I like very much.


D) I drastically increased the dynamics of a few instruments via MIDI velocity dynamic settings to make them sound even more realistic, vivid and soulful, especially the acoustic guitar playing the chords.

At this point, a big thank you to the guitarist @pixelseph, who pointed out to me that the acoustic guitar for the chords in the former version still sounded too unrealistic or too much like a VSTi.

By the way, I've optimized the guitar strings' attacks even further so that the acoustic guitar escalates a bit more in exactly the right places, hits the virtual strings more aggressively or produces a nice and natural-sounding crescendo in the chord progression.
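
In case somebody wants to do something similar on an exported MIDI file instead of inside the DAW's MIDI editor: expanding velocities away from a pivot value is a simple mapping. A sketch with the mido library (the file names and the scaling factor are made up for the example):

import mido

def expand_velocities(path_in, path_out, pivot=64, factor=1.5):
    """Spread note-on velocities away from a pivot value to exaggerate the dynamics."""
    mid = mido.MidiFile(path_in)
    for track in mid.tracks:
        for msg in track:
            if msg.type == "note_on" and msg.velocity > 0:
                new_vel = pivot + (msg.velocity - pivot) * factor
                msg.velocity = int(max(1, min(127, round(new_vel))))
    mid.save(path_out)

# expand_velocities("acoustic_guitar_chords.mid", "acoustic_guitar_chords_dyn.mid")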

I would also be really happy if you could perhaps add something to my thoughts regarding the panning of the electric guitars a little further down in this text.


E) To increase the artistic value, I drastically improved the composition (notation, timing and articulations) for some instruments, especially for the piano intro or for the harp parts (which are not in the audio samples here now, but which you will soon be able to hear in the finished version).

When I once played it to a somewhat more cultivated foreman on the building site, who is a musician himself and plays several instruments (as he always wanted to hear what I was working on musically), he said that the piano playing reminded him of a certain composer or pianist (unfortunately I've forgotten the name - I had probably never heard it before).

I took that as a compliment.
And the piano intro and harp parts did indeed turn out really well.


F) I composed completely new stuff for a few more instruments.

I won't reveal too much at this point.
But the few new instruments you can already hear in the audio samples below are a brand-new vintage lead trumpet (I always wanted to check out and use this really cool new Vita instrument from my DAW Samplitude - it plays around minute 0:06 in the first audio sample) playing against the already known trumpets, and a majestic, powerful and kinda heroic-sounding French horn (playing after minute 0:20 in the first audio sample) that leads from the more classic and orchestral part into the slowly rising power ballad part of the track.

I really enjoy the results I finally got with the new composition attempts.


G) Especially for these new instruments I created a wider panning, but I also thought about changing the sound and the panning for the electric guitars (the electric guitars that kick in around minute 0:35 in the first audio sample).

And with this I come to the main topic of this post, which I've already mentioned in the title.

Keep in mind that I've continuously kept all (!) aux reverb sends and most instrument signals (except drums, bass and maybe the piano) out of the center area of the mix to get a clarity like this.
Since the instruments also have different depth levels in the mix, I created an aux reverb send for each instrument (except for the drum elements - there's only one aux reverb send for all drum elements, but I might experiment with this a bit and also change it in the coming versions).



But first, I'll give you a little impression of the new things and improvements with a proper audio sample that kinda represents the current state of the composition and mixing for my coming Crisis Core: Final Fantasy 7 remix "Wings Of Freedom":


1) Less hard panning for the trumpets & standard panning for the electric guitars
------------------------------------------------------------------------------------------------------

 



As you can hear, you also have a little bit more clarity in this version than in the last audio sample from my previous posting (apart from the enhanced composition).

...

I didn't change much about the panning of the electric guitars here.

I've panned the already existing trumpets around 30 dB more towards the left side (meaning these trumpets are around 30 dB louder on the left side than on the right side - just to give a rough idea of the panning and stereo image of the signal; so it's not a fully hard panning, and you will still hear a bit of the dry signal of this instrument on the right side), while the aux reverb send for this instrument is panned around 90 dB more to the opposite (the right) side.

On the other side, I've panned the new vintage lead trumpet around 30 dB more to the right side, while the aux reverb send for this instrument is panned around 90 dB more to the opposite (the left) side.

And this is the version I prefer at the moment.
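
To make these "X dB more to one side" descriptions a bit more tangible, here's my rough mental model as a Python sketch (a simplification, of course - not how Samplitude's surround panner actually computes its gains): the quieter channel simply gets attenuated by that amount.

import numpy as np

def pan_by_db(signal, offset_db, side="left"):
    """Return (left, right) from a mono signal, attenuating the opposite channel by offset_db.
    offset_db=30, side='left' -> the right channel is 30 dB quieter than the left one."""
    quiet_gain = 10.0 ** (-offset_db / 20.0)
    if side == "left":
        return signal, signal * quiet_gain
    return signal * quiet_gain, signal

# Example: trumpets panned "30 dB more to the left", lead trumpet "30 dB more to the right"
mono = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
trumpets_l, trumpets_r = pan_by_db(mono, 30.0, side="left")
lead_l, lead_r = pan_by_db(mono, 30.0, side="right")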



But I also tried a few different panning approaches I'll show you in the following audio samples.
And I'll tell you my thoughts about them and explain why I favour most of the panning, mixing and sound design decisions in the first sample.


2) Hard panning for the trumpets & standard panning for the electric guitars
-----------------------------------------------------------------------------------------------

 



In this audio sample I just changed the panning of the trumpets and the panning of the lead trumpet (but not the panning of their aux reverb sends).

And I chose a really wide panning for both of them.

I've panned the already existing trumpets 90 dB more to the left side.
And I've panned the vintage lead trumpet 90 dB more to the right side.

With the aux reverb sends of the instruments panned really hard to the opposite sides of the dry signals of the instruments, this creates a really great feeling of distance and lots of clarity in the mix.

But on the other side of the coin, both trumpet sections lost a bit of their power (this really disturbed me when listening to all the different panning versions on my HD MP3 player and on other devices).

Guess that's why you shouldn't put mighty brass instruments that are supposed to represent some sort of power and heaviness in the mix (similar to bass and kick drum) fully to the sides if possible.

Since both hard-panned trumpet instruments are panned roughly like the likewise hard-panned aux reverb send of the opposing trumpet section, it could also be the case that one trumpet instrument gets weakened a bit by the reverb of the opposing trumpet instrument.



But how about a much narrower panning as a contrasting mixing approach for both trumpets?


3) Narrow panning for the trumpets & standard panning for the electric guitars
--------------------------------------------------------------------------------------------------

 



In this version I've panned the already existing trumpets only around 5 dB more towards the left side.
The vintage lead trumpet I've panned around 5 dB more to the right side (and again, no changes at the hard pannings of the aux reverb sends for both trumpet sections).

This one actually sounds really good concerning the two opposing trumpet sections - it gives both trumpets a really powerful sound.

But the reason I decided to rather go for the version showed in the first audio sample of this post was the fact that the viola (panned 3 dB more to the left side) and the electric guitar (panned 3 dB more to the right side), both of which can be heard quite clearly in the first 5 seconds of the audio sample, already have a really narrow panning (and this sounds good for the whole rest of the mix - so, I didn't want to change the established panning in these sections).

And too narrow a panning of both trumpet sections would somewhat conceal the likewise narrow panning of the viola and the electric guitar.



If you take a closer look, the choice between narrower and wider pannings of instruments and other sound signals may not seem so easy in some cases.

For example, I recently listened to the soundtrack "Everlasting Love" as a cover by the band Love Affair in the official audio version and the official video version.

The official audio version sounds like this:
 



Actually, I would have thought that this version would sound more appealing to me.

But the hard panning of the brass (apparently extremely far to the left) and the drums (apparently extremely far to the right - and then as a whole drum kit) somehow alienates me a little in terms of soundscape.

It sounds very clean in terms of mixing, but because the instruments seem so spatially separated from each other, the spatial impression and the musical context are somehow lost a little, while the power of the drums and brass also doesn't come across quite so convincingly.


I find the official video version with the narrow panning, which actually sounds a bit like a mono mix to me, somehow more convincing in terms of soundscape:
 



As far as the mixing and panning are concerned, it doesn't sound quite as clean as the official audio version.
But the drums and the brass in particular come across as really powerful.


But one version that really convinces me in terms of panning and the overall mixing is the original version of "Everlasting Love", which was sung by Robert Knight and which I got in the following version as a reference:
 



In this version, the panning sounds much looser, finer staged and more natural to me, whereby extreme pan settings for the individual instruments have obviously been avoided and something like a spatial coherence of the instruments in the overall sound image has been preserved.


I also really like the mixing and panning of the much later released cover version of "Everlasting Love" sung by the German-French pop singer Sandra:
 


...

But now let's go on with my own panning experiment.


4) Trumpets + reverb aux sends with identical panning & standard panning for the electric guitars
--------------------------------------------------------------------------------------------------------------------------

 



In this version I've panned the already existing trumpets (like in the first audio sample) around 30 dB more towards the left side - but I also panned the connected aux reverb send around 30 dB more to the same side (without involvement of the center of the mix, of course, but no hard 90 dB panning of the aux reverb send towards the opposite side of the related instrument).

A similar panning attempt goes for the vintage lead trumpet, which I've panned around 30 dB more to the right side, just like its related aux reverb send.

This is a rather good example of how you shouldn't pan the trumpets, at least not in this specific case with my aspiration to get powerful, assertive and clean sounding trumpets in this part of the mix.

In this case, you can hear clearly that both trumpet sections get clouded, washed out and weakened by their own aux reverb sends.

Of course, there is no rule not to mix like that.
If it meets your expectations and matches how the mix should sound in your vision, it's totally fine - for the viola, for example, I've chosen exactly such a pan setting (the instrument and its aux reverb send are both panned around 3 dB more to the left side), and it sounds pretty much like I wanted it to sound (because I wanted the viola to sound less relevant and less dominant in relation to the other instruments in this part of the mix).

For the lonesome French horn after the trumpet sections, this panning approach also works pretty well - so, I used it for the French horn as well (until I probably find a way that might sound even better).



So, in my opinion, the composition, the panning and the mixing of the trumpet sections are considered done for now.

...

Now let's go to the part with the electric guitars (the two electric guitars kicking in after around minute 0:35 in the audio samples - the electric lead guitar and the clean electric guitar).

That's the point, where I'm still not fully sure how to deal with the sound design of the electric guitars and the panning.
But maybe @pixelseph or another electric guitar pro can deliver some nice food for thought here.


So, let's go back to the previous audio samples with the standard panning for the electric guitars as a starting position.

In these versions, I've panned the electric lead guitar with some ping-pong delay from my guitar amp plugin Vandal (direct insert effect) around 5 dB more to the left side, while its aux reverb send is panned fully to the right side.
And I've panned the clean electric guitar with a similar ping-pong delay from Vandal (also a direct insert effect) around 7 dB more to the right side, while its aux reverb send is panned fully to the left side.

With the sound of the clean electric guitar I'm already really satisfied (for the panning there might still be some better options) - no big or even nasty pedal and effect chains there (not even distortion, just a really nice and clean electric guitar sound with a bold low-cut filter setting, a nice ping-pong delay and a little bit of reverb via aux send with an even bolder low-cut filter).
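
Since the ping-pong delay does a lot of the work for the clean guitar sound, here's a bare-bones sketch of the idea behind such a delay (echoes alternating between the left and right channel, decaying with each repeat) - obviously much simpler than what Vandal actually does:

import numpy as np

def ping_pong_delay(mono, fs, delay_s=0.3, feedback=0.5, repeats=6, wet=0.4):
    """Very simple ping-pong delay: echoes alternate between left and right, decaying by 'feedback'."""
    d = int(delay_s * fs)
    length = len(mono) + d * repeats
    left = np.zeros(length)
    right = np.zeros(length)
    left[: len(mono)] += mono   # dry signal in both channels
    right[: len(mono)] += mono
    gain = wet
    for i in range(1, repeats + 1):
        target = left if i % 2 else right  # odd echoes go left, even echoes go right
        target[i * d : i * d + len(mono)] += mono * gain
        gain *= feedback
    return left, right

# Example: half a second of a 220 Hz tone through the delay at 48 kHz
fs = 48000
tone = np.sin(2 * np.pi * 220 * np.arange(fs // 2) / fs)
l, r = ping_pong_delay(tone, fs)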

...

For the electric lead guitar, on the other hand, I'm still thinking about changing the panning and the sound.
I'm also thinking about creating a much more dynamic electric lead guitar sound.

At the moment (in the previous versions with the standard panning of the electric guitars), the electric lead guitar comes with an activated overdrive effect pedal (see small blue box over the pedal effect, which means that the pedal effect is active) from my Vandal guitar amp plugin, which looks like this in my DAW:

CC-FF7Remix-ElectricLeadGuitarWithVandalStandardSound.thumb.PNG.d69b87527b91bf5ff94c944e60df5804.PNG


As you can see, the drive knob of this pedal effect is not turned up even a little bit (for the sake of better dynamics).
But the tone and level knobs are turned up to the maximum (this gives the electric lead guitar its strong sustain and a slightly more assertive but still controlled sound).

...

But how about making the sound of the electric lead guitar a little more dynamic, more natural and less processed as a lead setting and changing the panning of the guitar?


5) Less hard panning for the trumpets & hard panning for the electric guitars
------------------------------------------------------------------------------------------------

 



In this version, I've panned the electric lead guitar around 90 dB more to the left side, while its reverb aux send is panned fully to the right side.
And I've panned the clean electric guitar around 90 dB more to the right side, while its aux reverb send is panned fully to the left side.

For the sound of the electric guitar, I decided to deactivate the overdrive pedal effect completely and change the settings like this:

CC-FF7Remix-ElectricLeadGuitarWithVandalAlternativeSound.thumb.PNG.014fede6c971cc6c086bf6ed51596452.PNG


As you can hear in the audio sample, the electric lead guitar with the completely deactivated overdrive effect pedal shows a much finer dynamic staging (the guitar sound becomes a little softer and quieter towards the end of a longer played note). To compensate, I have turned up the pre-gain knob minimally and the post-gain knob much more, to make the long notes sound a little stronger and longer instead of letting them slowly fade out shortly after half of the played note (which would be rather disadvantageous for the desired assertiveness of the electric lead guitar in the mix).

I also turned the Curve knob in the voicing section of the guitar amp plugin all the way down so that the low frequencies are filtered out of the signal even more.

What bothers me enormously in this audio sample, however, are the occasional somewhat harsh tonal outbursts of the guitar sound in the higher frequencies. This is probably due to the fact that treble and brilliance were already boosted via the timbre settings in the instrument editor, via the EQ plugin and then again in the guitar amp plugin Vandal itself - a rather cutting, extreme sound setting, which was possibly softened and made a little "creamier" in the previous audio samples by the activated La Crema overdrive effect pedal.

Let's see, maybe I'll be able to fix this by lowering the high frequencies a bit or by adjusting the velocity dynamics of individual notes that break out tonally.
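
One way I could try to tame those harsh outbursts outside the instrument itself would be a gentle high-shelf cut. Purely as an illustration (the frequency and gain values are made up, and it's again an Audio EQ Cookbook biquad, not my actual plugin chain):

import numpy as np
from scipy.signal import lfilter

def high_shelf(fs, f0, gain_db, q=0.707):
    """Biquad high-shelf coefficients (Audio EQ Cookbook). Negative gain_db = cut above f0."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    cw, alpha = np.cos(w0), np.sin(w0) / (2.0 * q)
    b = np.array([a * ((a + 1) + (a - 1) * cw + 2 * np.sqrt(a) * alpha),
                  -2 * a * ((a - 1) + (a + 1) * cw),
                  a * ((a + 1) + (a - 1) * cw - 2 * np.sqrt(a) * alpha)])
    den = np.array([(a + 1) - (a - 1) * cw + 2 * np.sqrt(a) * alpha,
                    2 * ((a - 1) - (a + 1) * cw),
                    (a + 1) - (a - 1) * cw - 2 * np.sqrt(a) * alpha])
    return b / den[0], den / den[0]

# Example: pull the range above roughly 6 kHz down by about 3 dB on a mono guitar buffer at 48 kHz
fs = 48000
guitar = np.random.default_rng(4).standard_normal(fs)  # placeholder for real audio
b, a = high_shelf(fs, f0=6000.0, gain_db=-3.0)
guitar_softened = lfilter(b, a, guitar)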

Otherwise, I would perhaps leave the sound of the electric lead guitar as it is in the previous audio samples.



But as far as the hard panning of the guitars in this audio sample here is concerned, I have to admit that - in contrast to the not entirely convincing hard panning of the trumpets in the second audio sample in this post - I really like it here in this part.

However, the hard panning for the electric lead guitar still sounds a bit strange to me.
But the hard panning of the clean electric guitar, which just plays as a melodic accompanying instrument, comes across really well here.

Let's see if there are further advantages in the sound if both electric guitars are panned less hard, similar to the less hard panned trumpets in the first audio sample of this post.


6) Less hard panning for the trumpets & less hard panning for the electric guitars
------------------------------------------------------------------------------------------------------

 



In this version, I've panned the electric lead guitar around 30 dB more to the left side, while its reverb aux send is panned fully to the right side.
And I've panned the clean electric guitar around 30 dB more to the right side, while its aux reverb send is panned fully to the left side.

So, the two electric guitars have a similar panning to the two trumpet sections - with the only major difference being that the trumpets play a little bit behind the two electric guitars (even if a direct comparison is difficult, as the two instrument groups don't play at the same time).

This version with the not-so-hard panning for the electric guitars also sounds very pleasing to me, although I could also imagine the version with the hard panning for the electric guitars here.



And to fulfill all my wishes regarding the electric guitar section, I had another thought some time ago.

How would it be if I mixed the electric lead guitar with the preferred narrow panning (slightly to the left side) similar to the first samples in this post, while I mixed the clean electric guitar with a hard panning (to the right side)?

And to restore the balance in the stereo image, I would simply compose an additional track for an electric guitar (another one in a clean or crunch setting, perhaps with an additional wah-wah effect or similar stuff), which, with slightly longer pauses and a slightly lower pitch, would once again have a kind of dialog with the other clean electric guitar and be panned hard to the other side (to the left).

With the clean electric guitars, I would simply put the aux reverb sends to the opposite sides of the corresponding instruments as usual.
And with the narrow panned electric lead guitar, I could either also connect the aux reverb sends fully to the opposite side of the instrument - or to both sides.

I think I'll try it out during my coming composing session.

With this mixing approach, I might be able to create a soundscape similar to the one in the song "Everytime We Touch" sung by Maggie Reilly - with the voice (which would be my electric lead guitar instead), which has a rather narrow panning somewhere between both sides, and with one clean electric guitar and another slightly distorted electric guitar panned hard to the left and the right side.

And a soundscape like this is really beautiful in my opinion:
 




If you have any feedback or inspirational thoughts on this topic, let me know. ))
