sephfire

Official Mastering and Production thread


... it works on some styles, as mentioned before... if you're doing rock/metal stuff, it definitely needs some ambiance to put the instruments together... I myself use a little bit of room reverb... quite little... I don't like things too shiny...

Okay, I read the brief mastering overview from gamedev.net linked in an earlier post, and I just want to say that it's almost all very bad advice.

I'll take it part by part...

Wrong. Utterly wrong. Mixing does.

Opinionated meaninglessness. Good songs sell, quality has just become cheaper and easier to attain. Tracker music will always be awesome. And the Prodigy used Cubase and a Roland W-30. Liam was a keyboardist, and he played those beats in by hand, dammit!

More crap. There are lots of ways to ensure quality sound. Using good samples and synths is a start, and a decent microphone and a quiet room, along with a quality mic pre, are far from impossible to acquire.

NO! You use a formant shifter!

And then you get the problem that plagues beginner mixers... way too much reverb on your song. Reverb is a condiment, and this method is like drowning a meal in ketchup. Combine this with any type of mastering compression, and it sounds unbelievably ugly. Turn the reverb off. Mix your stuff dry. Add a tiny little bit of reverb on only the stuff that needs it; you should barely even notice it.
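To put a rough number on "a tiny little bit": a reverb send is just a weighted blend of the dry signal and the reverb return. A minimal sketch of that blend (the 10% default is an arbitrary illustration, not a rule):

```python
def blend_wet_dry(dry, wet, mix=0.10):
    """Blend a reverb return (`wet`) into the dry signal.
    mix=0.10 means 10% reverb -- 'condiment' territory; mix=0.5
    is the drowned-in-ketchup zone warned about above."""
    return [d * (1.0 - mix) + w * mix for d, w in zip(dry, wet)]
```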

Using a compressor on every track can flatten an entire mix. A much better technique is just turning the fader down and EQing a track to taste. I'm mostly of the opinion that compressors should be used to tame peaks and keep volume levels from getting out of hand... Except on drums, where compression is best abused.
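The "tame the peaks" job can be sketched as a static gain curve that only kicks in above a threshold. This is a deliberately simplified illustration with made-up parameter values; real compressors also add attack/release smoothing over time:

```python
import math

def compress_sample(x, threshold_db=-12.0, ratio=4.0):
    """Reduce the level of sample `x` above `threshold_db` by `ratio`.
    Below the threshold the signal passes through untouched."""
    if x == 0.0:
        return 0.0
    level_db = 20.0 * math.log10(abs(x))
    if level_db <= threshold_db:
        return x  # quiet material is left alone
    # Above threshold, the excess dB is divided by the ratio.
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return math.copysign(10.0 ** (out_db / 20.0), x)
```

With these numbers, a 0 dBFS peak comes out around -9 dB while anything under -12 dB is unchanged, which is the "keep peaks from getting out of hand" use described above.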

This whole section is a recipe for phase cancellation problems. If you want "phat stereo effects", pan your instruments. Detune your oscillators. Just about anything will sound better than stereo enhancement.

There are tons now, but most of them are pretty useless. Good compressor plugins will do all the work for you in this respect, and harmonic exciters tend to give a very harsh sound to everything. I'd avoid them.

These all sound awful. The drums distort on the 2nd mp3. The third one just detuned the oscillators on the synth and added delay. The ones titled "premastered" and "postmastered" just add more distortion to the drums and plainly haven't had any thought behind them. The unfortunate conclusion I have to give you is that this is all bad advice from someone who doesn't know what they're doing.

Well, I'm a huge amateur at mixing, but I'm almost certain you can use reverb in a way that isn't considered "a condiment."

Now correct me if I'm wrong, but Reason's reverb has this thing... not sure what it is, but it's called ER-Late> or something like that, and I think its purpose is exactly this: to make something sound "transparent" but in your face at the same time. I do that a lot on cellos: I make 3 cellos, pan one semi-far right, one semi-far left, and one dead center, so the cello goes "around" things like drums instead of muddying them up.

That's my 2 cents; I'm sure it's wrong as hell xD


Does anyone know if there is anything like that out there? Like, clips or arrangements that you can download that haven't gone through final mixdown and mastering. I guess it sounds a bit odd, now that I think about it, but still, if it's out there, it would be great for practice. I know we can all use our own arrangements, but that kills the idea of solely focusing on the mastering aspect by adding your own biases and visions of your song back into the equation. I'm just trying to "isolate the variable," so to speak, and focus on mastering alone to learn more deeply about all of the sonic properties. Ok, this is way too long now. If anyone knows, let me know! Thanks!


I'm new here and to mastering audio, and I'm really bad at it, but I think I have a good tidbit of advice.

When making a song, I listen to it over and over for hours and start to get too used to it, making it harder for me to be discerning. When it comes to mastering, I missed a lot of parts that I had simply gotten too used to. I found that an easy way to get a fresher look at things is to turn my headphones around backwards. It reverses all the panning, and the song sounds familiar but new. Peculiarities jump out immediately. Of course, the best way to prevent fatigue is to take breaks and work in smaller chunks of time, but sometimes that's not an option.

I hope that helps.
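The headphone trick can also be done in software, since reversing headphones is just swapping the left and right channels. A toy sketch over hypothetical (left, right) sample pairs, not any real audio API:

```python
def swap_channels(frames):
    """Swap left/right in a list of (L, R) sample pairs --
    the software version of wearing headphones backwards."""
    return [(right, left) for (left, right) in frames]
```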


Yeah, I know exactly what you're talking about.

Here are some tips to get around it.

1.) Mute all the instruments and take a 10-minute break or so. Then go to an important section of the song and solo the "voice of the song." Is it too loud? Too soft? Adjust it until it's comfortable to your ears.

Then keep bringing in instruments in the order you feel is most important, adjusting the volume levels as you go.

In some cases, though, like the kick drum, the problem might be that it's too "attacking," and not necessarily loud.

I learned that there's a difference between how loud something actually is and how loud its tone is, and how loud a tone appears changes with your monitoring volume. But there is DEFINITELY a volume level where things are more balanced than at others; I believe it's slightly louder than the volume where your bass starts to make the table rumble, and the reason the bass sounds exaggerated is that the table is rumbling.

Remember to take off all the effects processing when you do this; that goes for EQ too. When you solo your first instrument dry, then re-enable the EQ and compression you applied to it to see if you did too much.

And don't go for a balanced mix: think hard about what your original idea of the mix was, like a nice warm mid, slightly thumpin' bass, and highs that don't sparkle or stick out, etc.

An example of what not to do is to make the song as balanced as possible; things just sound weird that way. The highs aren't sticking out, but neither are the mids; the place in between them is.


Good tip!!!! You can also direct your reverb. You can have your dry instrument play from one side while its reverb shoots out only to the other side, just by the way you wire it in the back. It sounds wicked cool in dance music!

Reverb can also sound good in a small dose on the entire mix. It might be the exact thing needed to gel your mix together, but it might also kill it.

Heh, coincidentally I went to the public library today and picked up the Home Recording Handbook. Though it may not be specifically for mastering, it has a lot of information on current technology and technique. Many professionals were involved in the book, including some from such respected newsstand titles as Computer Music and Electronic Musician.

I also picked up The Home Book of Musical Knowledge and Classical Music 101, but those are more theory and history than anything.

The titles you mention seem fairly solid; I'll check them out as well, thanks :)

I like the Mixing Engineer's Handbook!!


Hi there,

Something is annoying me a lot when I try to mix: I'm never sure of what I'm doing, and I think that's what's hard about this... you have to know how everything works. And what if I haven't understood it right?

The hardest thing for me to understand is the compressor. It can raise the volume by compressing the sound, and it's used a lot on drums.

But for what purpose should I use it on other instruments? I think it's for making an instrument louder without killing the others: if I want one instrument to sound louder than another and I don't compress it, I have to raise its fader, which buries every other sound below that volume. So I think compressing instruments other than drums is for that purpose; am I right? Or is it so that an instrument at the same volume as the others doesn't seem as loud as them?

Also, to make the master of my song a little softer, I cut -3 to -5 dB around 2800 Hz. What do you think of this? Good idea?

The most important question is the compressor: if I'm wrong, I really need to know why I should use it on anything except the drums ^^


I've recently become obsessed with expanding my knowledge of all things audio, and have been consuming any information I can get my hands on.

This site has some pretty good info:

http://www.tweakheadz.com/guide.htm

And I've also gotten a lot of information from the magazine Electronic Musician. They have all sorts of articles from interviews with professionals, guides on recording and mastering, and gear reviews.

http://emusician.com/


I like having double (or triple) layers of reverb on instruments that are lacking in sample quality. It works great on cheap-sounding synths too (like FL Studio's default synths). One trick is to use a very short, warm reverb (turn the high cut and decay way down) and put a longer reverb on top. Mind you, you can get away with heavier short reverb (like 20 to 50%), but use moderation on long reverb (no more than 10 to 20%, usually). Most drums can even be improved with short reverb. Short reverb will also give a warmer, more analog sound to your synths, and is good for everything, including basses.

Saturation is another great way to beef up your instruments, especially when they're thin sounding or of low sample quality. Mind you, saturation will greatly increase chances of conflict with other instruments in the mix, so make sure to use EQ afterwards. Saturation is basically a clean/overdriven type of distortion (i.e. FL Blood Overdrive).

Compression/limiting can make drums sound bigger, but when using it on anything else, try to use it in moderation and only AFTER you've done all your other mixing. As for side chaining, it's good as an effect and not much else. Side chaining everything into the kick sounds awful. Same thing with Autotune, good as an effect (in moderation), not a substitute for good singing.

Over 90% of problems can be solved within the arrangement. If all of your instruments are playing in the same register/frequency range, then EQ is not going to solve all your problems. Also having good samples helps, but it is not an end-all, and being able to mix properly certainly helps.


Great tips!

I'm an amateur at production. I'm working on a 15-minute orchestral arrangement, so working on the production is extremely tedious. They say to take breaks in between to refresh your mind. Those breaks are fine for short arrangements, but this long arrangement has taken me 4 years now (on and off). Every time I refresh my mind, I end up changing it over and over again. If I compare the beginning of the arrangement to the latter part of the song, it's too different and inconsistent production-wise (i.e. the beginning is quieter, the latter part is louder, which is unintentional). It's been plaguing me for years. With a shorter arrangement everything is closer together, so fixing production issues is much easier. Not with this medley. I need to find an efficient way to make it sound well balanced and consistent throughout. The orchestral genre has a lot of dynamic variance, and I need to make it sound "human." Note velocities are all over the place. I have over 20 channels and over 500 measures to work with :sleepdepriv:

Also, even with the earlier tip about turning your headphones around backwards for a fresh perspective, I'm just getting nowhere :(

What should I do with an arrangement this size? I'm afraid of using a compressor on this kind of genre. Most of the time I adjust the note velocities, but going through over 1000 notes individually is just too time-consuming. I need to come up with a standard to streamline this process across the entire medley, not just its individual subparts. Any suggestions?


Usually a combination of note velocities and expression control (probably CC11, depending on your sample library) will be your best bet in getting natural sounding dynamics. Usually I use velocity to control basic level, but since velocity layers don't always transition smoothly, I use expression controller automation to fine-tune things and glue dynamics together on the phrase level rather than on the individual note level. I have a default value for the expression controller that I come back to after each phrase -- this helps me keep the overall level of the instrument in generally the same place so the dynamics don't vary too wildly.

Some libraries let you link a velocity crossfader to a MIDI CC so you can draw or record CC curves to change velocities instead of editing individual velocities (sometimes this sounds good, sometimes not, since crossfading velocities makes multiple samples play over top of each other -- basically bleeding the velocity layers over one another -- which can sound odd for solo instruments).

For achieving consistency in levels across the entire arrangement, it may be a good idea to come up with a rubric of sorts that tells you what different dynamic levels in the various instruments "look" like. For example, you may decide that mp in the strings is what you get when your velocities are around 20 and the expression controller is around 90 and p is when velocities are around 20 and expression is around 50. This may help you think more consistently about levels over the whole piece, because it will allow you to use objective values to determine dynamic levels rather than simply doing what kind of sounds right and then discovering that it sounds way different from something you did somewhere else.

The unfortunate truth, though, is that no matter how you go about handling dynamics and sample use, the editing process is extremely time consuming and usually involves some level of attention given to every single note in the piece. It helps to have a clear idea of how you're going to approach it so you don't end up wasting time redoing things, but ultimately you just have to decide how much effort you're willing to put into it and then bite the bullet and do it.
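For the thousand-note problem specifically, many sequencers offer bulk velocity transforms, and where they don't, the idea is simple enough to script. A hedged sketch over hypothetical (pitch, velocity) pairs, not any DAW's actual API:

```python
def scale_velocities(notes, target_mean=80, amount=0.5):
    """Pull every velocity `amount` of the way toward `target_mean`,
    clamped to the valid MIDI range 1-127. amount=0 leaves the notes
    untouched; amount=1 flattens all dynamics to the target."""
    scaled = []
    for pitch, velocity in notes:
        new_vel = velocity + (target_mean - velocity) * amount
        scaled.append((pitch, max(1, min(127, round(new_vel)))))
    return scaled
```

Applied per phrase rather than per note, this is one way to build the "rubric" described above into a repeatable step instead of hand-editing each velocity.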


A quick question, one I didn't want to make a thread for:

I am trying to upload some tracks for an album on my last.fm page.

I made all of these tracks on TFM Music Maker, and I think that it exports only in mono or something.

Now, the MP3s I have are recorded in Mono, but last.fm only takes stereo. Is there a way to use a program like Audacity to turn mono tracks into stereo?

EDIT:

Problem solved: figured out how to do it in Audacity ;)
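For anyone who lands here with the same question: conceptually the conversion just puts the same signal on both channels. A minimal sketch of that idea (not what Audacity actually does internally, which also involves file handling):

```python
def mono_to_stereo(samples):
    """Duplicate a mono sample sequence into interleaved L/R frames."""
    stereo = []
    for s in samples:
        stereo.extend((s, s))  # identical signal on both channels
    return stereo
```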


I'm reading the topic slowly 'cause it has tons of great info... but I'd like to drop a question about hardware: what do you guys think about M-Audio's Fast Track Pro? Is it possible to do nice sounding mixes using that soundcard and pro headphones like AKG k702 (which I'm planning to buy soon)?


Good headphones/monitors are vital, but unless your soundcard is a piece of crap you shouldn't need any other hardware. You do need your ears, tho. ;)

Also, an audio interface isn't the same thing as a sound card.


In a word, yes. I haven't used the Fast Track Pro, but as far as being able to get a good mix, an audio interface is an audio interface, and it will definitely get the job done.

I use an AKG K702 set and it's awesome. It will be easily the best set of headphones you've ever used. (Extremely long burn-in period, though, so if the bass seems like it's sort of weak, just keep using it for another few weeks.)


Thanks! :D


I've had FL Studio 9 for over a year and have been making some songs using the program, however, I've always had problems with adding the final touches and tweaking parameters altogether to get better quality sound. I was hoping to get some helpful advice here so I can get past this frustrating step and start finishing songs instead of leaving them on file for months (years).

I've read the mastering/production guide from gamedev.net on the first page, and that helped give me somewhat of an idea of what I should be doing, but it also confused me to no end. I understood some of the sound synthesis concepts it talked about like ADSR, mostly because I've toyed around with parameters beforehand, but I didn't see the practical use of applying the concepts, like how I would know if I wanted a low pass filter as opposed to a high pass filter.

Something else that was new to me was where it talked about pitch frequencies, such as a certain number of kHz being considered bass and another interval considered treble and how you don't want the bass voice and kick drum on the same frequency. I'm sure that in some of my songs, the bass and kick drum would be inseparable by ear at some parts, but I didn't know how to fix it and still have trouble separating the two.
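One common fix for the kick/bass collision is to high-pass whichever instrument needs the low end less, so the two stop fighting over the same frequencies. As a rough illustration of what an EQ's high-pass band does, here is a first-order filter in pure Python (real EQs use steeper, better-behaved filters; this is only a sketch):

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    """First-order high-pass filter: attenuates content below
    `cutoff_hz` while letting transients and highs through."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_in = prev_out = 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)  # pass changes, block DC
        out.append(y)
        prev_in, prev_out = x, y
    return out
```

Fed a constant (pure low-end) signal, the output decays toward silence while the initial attack still gets through, which is exactly the "let the kick keep the lows, let the bass keep its definition" behavior.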

Anyway, on to the mastering problems. I'll be honest: the biggest problem I am having is that my songs sound fuzzy and staticky in some parts when I upload them to YouTube or a song-hosting site. However, when I export my songs to MP3 in FL Studio, no problem is audible when I play them in Windows Media Player. Is it my sound card? Some compression problem I caused in FL Studio? The way I exported it in FL Studio? I don't know.

I guess I should talk about my sound setup and the way I master. To be blunt, my setup sucks. Here's a screenshot of my settings:

[Screenshot: FL Studio audio settings]

As you can see, my sound card is pretty bad. I tried running ASIO4all v2, but for some reason it gives me a lot of underruns and doesn't fully buffer my sounds. Although it says the output is through stereo speakers, I have my 20-dollar Sony MDR-XD100 headphones plugged into one of the speakers. That's pretty much it as far as my sound setup goes.

For most of my songs, I have Fruity Compressor and Maximus on the master track, though I don't tweak the parameters on Maximus at all (I know I probably should, but I'm afraid of messing something up). For the current song I am working on, a bossa nova piece, my compressor settings are as follows: threshold, -11 dB; ratio, 3.5:1; attack, 15 ms (unchanged); release, 200 ms (unchanged); and type, hard (unchanged). Like all of my other songs, this song suffers from quality problems after it's uploaded to the internet.

Other than adjusting the volume faders for individual tracks and toying with the EQ settings, that's all the mastering I do. When I'm done, I set the bitrate anywhere from 160 to 220 kbps and export the song to MP3.

Although I'm sure this has been a long read for most, I'd really appreciate a helpful response. I'm in no way in a position to spend a fortune on hardware or software, but I can break my budget a little if it means that my music will sound better.


A quick question.. I tried googling it, and I'm afraid I still don't completely understand. What does "processing" mean? I hear it everywhere on OCR, and I feel like I have a general idea of what it means, but I've never been completely sure for some reason. Like how halc said he used tons of processing on his last posted mix "The Great Blizzard of 9X."


Processing is everything you do to a sound. If there's reverb, EQ, guitar amps, side-chaining, stretching, detuning, pitch correction... anything, there's processing.

If there's "a lot of" processing, there's either a lot of effects, or the effects are drastic. Or both.


I'm not a fan of too much processing, frankly, but at the same time, I like enough for it to sound professional. And even then, it's hard to make my pieces as loud as most of the other music I hear without destroying the quality of the audio (i.e. clip distortion). :???:


I've recently started learning how to do production and mastering and I have some questions.

1. I am working on a song and it is quite near the 0 dB ceiling, going a bit past it on some notes. The thing is, when I listen to my song and then to a well-mixed song, I notice the well-mixed song sounds louder than mine.

From what I've read here, maybe removing some unused frequencies should help me, but I'm not sure. Am I right?
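Part of the answer is that the loudness gap is usually about RMS (average) level rather than peak level: two mixes can both touch 0 dB while one carries far more average energy and therefore sounds louder. A quick diagnostic sketch, illustrative only:

```python
import math

def peak_and_rms_db(samples):
    """Return (peak_dBFS, rms_dBFS) for samples in [-1.0, 1.0].
    Mixes with the same peak but higher RMS sound louder."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    to_db = lambda v: 20.0 * math.log10(v) if v > 0 else float("-inf")
    return to_db(peak), to_db(rms)
```

Comparing these numbers for your render and a reference track shows whether the difference is actually level, EQ, or something else.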

On another note, I've not used a compressor on the master track, and I think I should avoid that until the mix sounds good by itself.

2. The other thing I've noticed is that other mixes sound very sharp, and even though mine doesn't sound muddy or anything, it is less sharp than other mixes (maybe using free samples works against me, but I think there's a better explanation :-P ). Any tips on that?

Any help is appreciated!

