
SnappleMan

Members · Posts: 1,732 · Joined · Last visited · Days Won: 5
Everything posted by SnappleMan

  1. The reason I sometimes put a compressor on from the beginning is because it sets up a more musical way to hit 0dB. It's like a pre-warmed sound that lets me mix the tracks with just volume before I start using compression on each track. When I do it this way I end up using a lot less compression on the individual tracks. The flip side is mixing the tracks without anything on the stereo bus, and this always sounds better to me at the end, because the mix is cleaner and more punchy with more dynamics, but the issue there is that I need to apply compression to the stereo bus afterwards, and it's more difficult then because it can tip the balance of the mix, forcing me to go back and tweak it. So yeah, it basically comes down to mixing through the compressor so you don't have to adjust it later vs. mixing through nothing and having to tweak things after you apply the compressor at the end. As for RMS metering and all that, I use Cubase, since the mixing console has a very good master meter that gives me RMS, Peak, AES17, and EBU R128. It also lets me monitor the true peak, momentary max, short-term max and integrated max levels in LU and LUFS, as well as being able to switch between the different standards across the world. Basically it's got it all. Ideally I'd love to be able to mix without regard for RMS or EBU standards. I would mix everything to -20dB and put faith in people to use their volume knobs, because really that's how I prefer to hear music. But we all have to stay competitive, especially those of us doing this professionally (and even more when we're doing music competitions like DoD). The only real way to get noticed in certain situations is to just be louder than the previous guy...
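A quick illustration of what an RMS readout is actually measuring: it's just the mean square of a block of samples, expressed in dBFS. This is a minimal sketch, assuming float samples normalized to [-1.0, 1.0]; the function name is my own, not anything from Cubase.

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of float samples in dBFS.

    With samples normalized to [-1.0, 1.0], a full-scale square wave
    reads 0 dBFS and a full-scale sine reads about -3.01 dBFS.
    """
    if not samples:
        return float("-inf")
    mean_square = sum(s * s for s in samples) / len(samples)
    if mean_square == 0.0:
        return float("-inf")
    return 10.0 * math.log10(mean_square)

# One second of a full-scale 440 Hz sine at 44.1 kHz:
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
level = rms_dbfs(sine)  # about -3.01 dBFS
```

Loudness meters like EBU R128 build on this idea with frequency weighting and gating, which is why their numbers tend to track perception better than raw RMS.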
  2. I do share a bit of your frustration at the notion of forced "improvements", but there are so many people doing so many interesting things with mixing/mastering that I just like hearing all the countless perspectives.
  3. There's nothing to be confident in. You're only challenging yourself by assuming that there's a right and wrong way, and that you have to convince people. Uncompressed music sounds great, as does compressed music, but to say that sound quality goes down when compressing a master bus is untrue. What you might mean is that fidelity goes down, which it does, but that doesn't decrease the quality of the music. Quality is relative, and there are many styles of music these days that depend on degradation of fidelity to get a desired effect. And also "today's music" is being compressed on the master channel because we're trying to emulate the natural compression that made "yesterday's music" sound the way it did. Music has ALWAYS been compressed, the only difference these days is that people have the ability to push that compression to all kinds of creative new levels. Do things the way you want to do them, that's the beauty of music.
  4. There's always backlash, though. I mean if trends didn't calm and normalize then we'd be mixing music to 0 dB RMS by now. You can argue that any music will damage your hearing, and anything other than a live performance in front of you is degraded because of the electronics involved. All I care about is hearing things in a manner that pleases me personally, fully aware that what I think is pleasing is a result of established trends, but who cares. Compressors were initially created to deal with high noise floors when recording; that doesn't mean they should be discontinued for mixing purposes.
  5. Yeah, that's part of it. Mastering also includes very precise EQ, compression, imaging, limiting and sometimes even reverb. The big differences are that in mixing you're working on the balance to get the most out of the instruments and make them fit together correctly within the musical context of the song; in mastering you're bringing out the most of the mix and trying to make it as sonically pleasing and even as you can. Another big part of mastering is getting the cues down, song lengths, fidelity, and formatting so you can deliver a package in whatever medium is required. The end result in mastering these days is usually a well-rounded sounding disc image that can be used for production.
  6. Think of it as compressing a single sample. Once the mix gets to the stereo bus it's basically one sample playing through it. Break it down to something like a snare drum: if you compress the snare drum itself, you get some characteristics out of it, whereas compressing the entire song gets different characteristics out of the snare drum because of the context that it's in. Compressing the entire mix at that stage also brings up things between the transients and kind of cooks the song together a little bit. If you compress too much then you get artifacts and squashed dynamics, and that's where the balance comes into it.
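That balance between gain reduction and squashed dynamics comes down to the compressor's static gain curve. A minimal hard-knee sketch in Python; the function name, threshold, and ratio defaults are illustrative, not anyone's preset:

```python
def compressor_gain_db(input_db, threshold_db=-18.0, ratio=4.0):
    """Gain (in dB, <= 0) applied by a hard-knee downward compressor.

    Below the threshold the signal passes untouched; above it, the
    output rises only 1 dB for every `ratio` dB of input.
    """
    if input_db <= threshold_db:
        return 0.0  # no gain reduction below threshold
    over = input_db - threshold_db
    # Compressed output level minus input level = (negative) gain.
    return over / ratio - over

# A peak 12 dB over an -18 dB threshold at 4:1 gets pulled down 9 dB:
reduction = compressor_gain_db(-6.0)  # -9.0
```

A real compressor smooths this curve over time with attack and release envelopes (and often a soft knee), which is where the "cooking together" effect between transients comes from.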
  7. Well, what we do in our DAWs in the "mastering" stage of a song's production is different from what actual mastering of a song/record is. Most of what mastering is cannot be done in the mixing stage alone; that's why the process is broken up. I think what we here refer to as mastering is the process of getting cohesion in the mix at the last stage of the chain. And really, no amount of individual track mixing can do what compressing the entire stereo bus does, even if you're just barely compressing it. But yeah, RMS is getting pretty unreliable for me these days, because what I hear doesn't always match what I see in number values. I've started metering in EBU and things are finally starting to translate correctly between the actual physical world and the digital world.
  8. Oh hell yeah, -6dB is ridiculously loud, but I like being loud. And that's another thing worth noting: the elements in your song contributing to the loudness make all the difference. When I mix to that level I have very carefully EQed guitars being hard panned and all kinds of things that aren't going to sound as loud as they actually are. Stereo imaging also tricks your brain with volumes and levels, and harmonic exciters (maximizers) do a huge job of raising RMS without destroying the mix. With careful EQ and compression I get mixes that are loud but never clip (even with all stereo bus effects off, including limiters). Going back to the issue of stereo bus headroom: no channel going into the stereo bus is peaking past -10dB, and this gives my master compressors all the headroom they need to keep from breaking the sound.
  9. I generally shoot for -6 RMS, but that really depends on the song. As far as gain reduction and compression, I do all that before I start mixing. I set up my stereo bus chain before I start the mix, usually with a ratio of no more than 4:1 (usually 2:1) with a brickwall limiter on. I make up the gain via harmonic exciters instead of just driving the compression. This way I mix everything with regard to the compression, so I end up using much less compression on the individual tracks and get a cleaner/punchier/louder mix in the end. Also, doing things this way lets me fine-tune the overall gain stage of the entire mix, so I can get up as high as -2 RMS without heavy clipping (though I've never felt the need or been asked to go that stupidly high). The thing I go on about most of all is always headroom. I believe that the management of stereo bus headroom is the deciding factor in the punch-to-volume ratio of a mix. My philosophy is to control the low end so it doesn't eat up bandwidth, and not to crowd the high end (everything doesn't need to shimmer), so as to keep the master compressors from overworking in a negative way.
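The headroom arithmetic behind that kind of gain staging is easy to check in code. A sketch with names of my own choosing; note this measures sample peak, not true peak, which would require oversampling:

```python
import math

def db_to_linear(db):
    """Convert a dB value to a linear amplitude ratio."""
    return 10.0 ** (db / 20.0)

def peak_dbfs(samples):
    """Sample-peak level in dBFS (true peak would need oversampling)."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0.0 else float("-inf")

# Channels peaking at -10 dBFS leave about 10 dB of headroom
# before the stereo bus itself can clip:
headroom = 0.0 - peak_dbfs([db_to_linear(-10.0), -db_to_linear(-10.0)])  # 10.0
```

In practice summing many channels can still push the bus higher than any single channel's peak, which is why the compressor and limiter sit there to catch the overshoot.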
  10. That's a synth, specifically a detuned square wave. You make it by using two square wave oscillators, one of which is detuned very slightly. The key is to make sure the patch is monophonic and portamento is turned on.
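That patch recipe is easy to prototype. A rough sketch with a naive (aliased) square-wave oscillator and an arbitrary 7-cent detune; the monophonic voice handling and portamento a real synth patch would add are left out:

```python
import math

SR = 44100  # sample rate; arbitrary choice for this sketch

def square(phase):
    """Naive square wave from a phase given in cycles (aliased, demo only)."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def detuned_square(freq_hz, detune_cents=7.0, seconds=0.1):
    """Sum two square oscillators, the second a few cents sharp."""
    f2 = freq_hz * 2.0 ** (detune_cents / 1200.0)  # cents -> frequency ratio
    n = int(SR * seconds)
    return [0.5 * (square(freq_hz * i / SR) + square(f2 * i / SR))
            for i in range(n)]

lead = detuned_square(220.0)
```

The characteristic thickness comes from the two oscillators slowly drifting in and out of phase at a rate set by the detune amount.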
  11. Meteo can't get over the fact that people have jobs and earn income.
  12. I've yet to hear any actual music from this studio...
  13. A synth, as it's known these days, is a piece of hardware or software which creates sound via oscillators or a wavebank that can be shaped via an ADSR envelope (in most cases). A keyboard is a piece of hardware which has piano keys. Some keyboards have built-in synths; some are just MIDI controllers (soundless keyboards made for controlling sounds on a computer or synth). The question of synths compared to PC software is a tough one these days. Most people use a hybrid method of keyboard and computer. Usually a keyboard/synth is used for sounds and controlling sounds while the computer is used for tracking, mixing, and generating sounds of its own (via virtual instruments and software synths). If you want an all-in-one solution then you are looking for keyboard workstations, which are keyboards/synths that have a built-in sequencer and audio recorder allowing you to make complete songs on them, but they are very expensive (around $3000-4000). Here's a quick breakdown of how things stand these days:
Keyboards/Synths:
  + You can play them to generate sound.
  + The sounds tend to be musical and programmed for performing, so it's easier to sound good.
  + Onboard controls let you be expressive musically.
  + Easier maintenance, few problems.
  + Generally more musical and inspirational.
Computers:
  + Sound quality of instruments is better.
  + Much more powerful for creating songs.
  + DAW software tends to be intuitive and very good for building music.
While keyboards sound great and are musical, they'll never have sounds that are as realistic and detailed as virtual instruments on a PC. That doesn't mean that a PC is better, since it's more difficult to use huge, convoluted sample libraries. Think of it as programming music (PC) vs. playing music (keyboard). That's why the combination of PC and keyboard is crucial: it gives you a good balance of the two. If you're starting out I suggest you go with the combo of a MIDI controller keyboard and a DAW for your computer.
Good DAWs include: Cubase, Cakewalk Sonar, Apple Logic, FL Studio, Ableton Live, and Reaper. Do your research (Google is best) and figure out what you like. There are many demos to try and learn (and you can use Reaper for free until you feel comfortable enough using it to buy it). If you're serious about music then you'll do serious research and figure things out; no one person can tell you what you would like best. Look up some MIDI controllers, get a list of the ones you're interested in, then go to a music store and try them out, and get the one that feels best under your fingers. But yeah, the most important thing is that you use Google and learn about this stuff; you'll be much better off.
  14. No reason to be rude. Especially not to a potential client. The reason $40 is too low is the circumstance of it being strictly for personal use. If it's $40 for a small game project that'll get some kind of decent public release, then it's worth it, since you're getting exposure and potential new clients because of it. When it's something that's stated as personal use, the only compensation you get is the initial payment, so in that scenario you really need a fair amount.
  15. You need to offer more than $40 for something like that. An immense amount of work goes into making music, and $40 is simply not enough to compensate for that much time and effort if you want true quality work. You get what you pay for. Always.
  16. We would have won if you chose one of my amazing titles that I had thought up. But whatev. Anyway, I think what really kicked your ass was the complexity of the source; it's very deceptive. I got too excited at the notion of working with such an ambitious piece of source material that I forgot about the whole novice part of this endeavor. Still, even with the difficulty of the source I think you did okay, and hopefully you picked up a couple things from what I told you. So good job! <3
  17. Are you using the latest EZdrummer update? Are you bridging a 32-bit version of it in 64-bit FL Studio?
  18. Also, the performance makes a bigger difference than the mixing in most cases. So study some metal tracks and learn how those drummers play the songs, then just copy it via your sequencing.
  19. You're wasting your time as a musician if you see learning a new skill as a waste of your time. Especially one as essential as basic mixing. Just play with EQ and compression until you get sounds you like. For snap and punch you generally want to EQ your kick drum with a boost at 80-120Hz, a cut around 500Hz, and boosts at 3.5kHz and again at around 10kHz. For snares you want to boost a little at wherever the punch is (generally around 200Hz) and then again at 3-5kHz and 10kHz. These are all relative to the fundamental frequencies, so to find those, just play with a single band of EQ on a sound until you hear the most unique qualities pop out (like the ring or snap of a snare; each of those is focused on a certain EQ range). Get that down and then you can start thinking about compression.
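Those boost/cut moves map directly onto a peaking EQ filter. Here's a sketch of the standard biquad coefficients from Robert Bristow-Johnson's Audio EQ Cookbook; the function name, Q default, and example settings are my own:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients per the RBJ Audio EQ Cookbook.

    Returns (b0, b1, b2, a1, a2), normalized so that a0 == 1.
    """
    a = 10.0 ** (gain_db / 40.0)          # amplitude from dB gain
    w0 = 2.0 * math.pi * f0 / fs          # center frequency in radians
    alpha = math.sin(w0) / (2.0 * q)      # bandwidth term
    b0 = 1.0 + alpha * a
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a
    a0 = 1.0 + alpha / a
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

# e.g. a kick-punch boost of +4 dB around 100 Hz at 44.1 kHz:
coeffs = peaking_eq_coeffs(44100, 100.0, 4.0)
```

At 0 dB gain the numerator and denominator cancel, so the filter passes audio untouched, which is a handy sanity check for any EQ implementation.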
  20. Or if you are brave enough to add some variety to your melodic instruments then a steady drum track is necessary to keep a focal point to the arrangement while everything else is doing harmonically and rhythmically varied things. Designating the rhythm to JUST your drum track is the first step in having a repetitive beat, regardless of how frantic you make that drum track.
  21. The best thing that you can do is find some way to perform your drum tracks instead of just mousing them in (MIDI keyboard or even qwerty input which some DAWs support). Play along with the track for a while and you'll most likely start playing different variations of the drums naturally. But the most important thing is to make sure that the beat complements the bass and general groove of the song, some of the very best drum tracks are the simplest.
  22. Hey Chump Change, get off your ass and contact me, we got work to do.
  23. Alright, now that magfest is over SIGN ME UP!