Everything posted by zircon

  1. I've done the first two fights of AQ40 (NOT Sartura). Very good items overall from there, no reason not to do it imo.
  2. Actually, tier1->tier2 could barely be considered an upgrade if you want to look at it that way. Many Shamans prefer Earthfury over Ten Storms for PVE, for example. I see rogues using their Nightslayer set rather than Bloodfang sometimes as well, and I myself see little reason to spend all my points on Transcendence when I can get UBER upgrades like Pure Elementium Band, Rejuvenating Gem, etc. In comparison, however, the sets in AQ are VERY good. Neither Prophecy nor Transcendence even holds a candle to Oracle, in my opinion.
  3. Aside from billing you, Blizzard appears to be incapable of doing anything on time.
  4. ReMixing Tips Part 5: Production Values

Ok. This one is going to be a bit long and a little more abstract than my other tutorials. Bear with me, as I think there's some useful information here. First of all, I'd like to present my definition of "production values". To me, production values are the technical aspects of a song - the implementation of it, if you will. A song is really only notes on paper, or a melody in your head, until it is actually executed in some way. You can have a brilliant composition entered into your sequencer, but if you don't pay attention to the production values (things like choosing the right samples, making good recordings of the different parts, choosing proper effects, balancing volume levels, and so on) then your final product will suffer as a result. When it comes to ReMixes, chances are you will be dealing mainly with things that are generated from your computer rather than a lot of live recordings, but I'll try to cover both of these areas. What I intend to go through in this tutorial is how to best approach the production of a song in order to make it sound as good as it possibly can. Keep in mind, once again, that just about everything you are about to read is my own personal opinion on the topic. I've developed a methodology primarily based on lots of practice; thus, what you are getting is a personalized approach. Nonetheless, I've also learned a lot by reading interviews, listening to well-produced music, taking classes on the topic, reading articles, browsing forums like this one, and talking with more experienced musicians. So, I didn't completely make everything up off the top of my head. Anyway, let's begin.

Determining the Style

The style of your mix is the most important thing to keep in mind when you are considering how to produce it.
Now, before you say "But I don't want to lock myself in to just one style", the song you have completed (or are in the process of writing) is likely easier to categorize than you think. Let's set aside terms like "trance", "hip hop", "metal", or "jazz" for a moment. I am assuming that if your mix falls squarely into one of those categories, then you already have a pretty good idea of the general sound you are going for. Think about HOW you are constructing your song. Is it going to be centered around a vocal line, accompanied by soft instrumental parts? Is it going to be fast-paced, with a driving rhythm? Is it going to be mostly made with synths? Is it going to be a solo instrumental piece? Try to narrow down your idea of what the song should have. For the sake of analysis, let's take a look at my ReMix "Calamitous Judgment". I basically knew that I wanted to integrate orchestral instruments (because I think they can add a dramatic flavor to the song), as well as a strong drum groove and synthesizers. You may not have any other specifics beyond these kinds of things - for example, what KIND of orchestral instruments, or what KIND of drums you're going to be using for the groove - but narrowing it down this far is very important to the process! If you honestly have no idea what kind of ReMix you are doing, not even the slightest clue, then spend time in your sequencer switching around between different sounds and narrowing down the style from there. The reason why this is vital in the production process is because it impacts the rest of your decisions from here on out. People always tend to ask me very general questions such as, "How can I make my drums better?" and I really have trouble responding to that, because every style is different. If you are doing an acoustic rock song, "making your drums better" might involve finding a set of realistic drum samples and writing a very humanized sequence with lots of fills and variations in the patterns.
If you're doing an energetic breakbeat song, then your biggest problem is probably not the pattern of the drums, but how the samples are processed. Two very different outlooks on the same problem.. all based on what style we're dealing with.

Live vs. Sequenced (MIDI)

Another important production decision is whether you are going to use recordings of musicians playing live, sequenced MIDI sounds, or a combination of the two. As I said earlier, more often than not, ReMixes tend to use parts that are MIDI sequences played through software sounds. However, don't discount the value of having a live instrumental or vocal part. Here are my general tips on the subject.

* Drum parts suffer the least from being sampled/sequenced. From free, high-quality sample sets like NSKit 7 to relatively low-cost libraries such as BFD or Drums from Hell, the overall quality of drum samples is simply stunning. If you have a good rhythmic sense, you can listen to music with grooves that you like, and reproduce them pretty easily in a sequencer.

* Vocals are on the opposite end of the spectrum. Sampled vocals usually come across as cheesy and out of place, outside of "ahh" and "ooh" type choir sounds. You CAN use vocal clips tastefully, but if you want a full sung or spoken line, you're better off writing it yourself and finding someone to perform it.

* Orchestral sounds are difficult to sequence realistically, and high quality sampled ones are sometimes hard to find, but because it is unlikely that you have a full orchestra at your disposal (for a host of reasons), you're better off sticking to samples and learning how to use them well. The exception is..

* Solo orchestral instruments. If you want an expressive violin, cello, or horn line played by a single musician, it is going to be really, really hard to do well with samples. Even the best commercial samples don't sound very good solo.
Thus, my recommendation would be to either reconsider whether you want a solo orchestral instrument part at all, or look around on these forums for a performer who can help you out. They're more common than you would think.

* A piano line can go either way. Many ReMixes on this site use sampled piano, including some of the piano-only ReMixes, and some of them are entirely sequenced by mouse (rather than played in on a MIDI keyboard). I would lean towards using a sampled piano rather than going to the bother of recording a real one, as chances are the samples would sound better. However, if you only have free samples, and/or sequencing piano is not really your thing, a better option might be to write out a MIDI file or sheet music and give it to a performer with either a good recording setup or a good MIDI recording setup.

* Electric guitars can also go either way, but they tend to sound less realistic more often than not. If you're doing rhythm guitar stuff, or sustained chords (particularly powerchords), you can get away with using samples. Lead guitars can be sampled also, but if you really want to do a rock or metal style remix, NOT a "synth metal" one, consider a live performer. If you're going for a more synthy or retro sound, on the other hand, by all means use all the electric guitar samples you want.

* Acoustic guitar, surprisingly, can sound pretty good - even with free samples. Once again, if it's a mix centered around acoustic guitar, you can always find one of many guitarists here who regularly play acoustics, but for most purposes, using samples will suffice.

While these aren't all the instruments available, this should give you a good starting point for how to approach sampled parts vs non-sampled ones.

Balancing Multiple Parts

Aside from solo pieces, you're probably going to have at least three or four different sounds of some kind going on in your mix. Part of the production process is balancing these parts in an intelligent manner.
This involves managing the volume, panning, and effects of the individual parts. Please note this is NOT really mastering in the traditional sense, though if you do it right, people will likely say your song sounds well-mastered. Here's where the choice of style comes in. If you're doing an upbeat dance song, you'll want an emphasis on the drums, with even more emphasis on the kick and snare. You can do this by increasing their volume, adding compression and distortion, and EQing up resonant frequencies. For other styles, the drums don't need to play such a big role. For example, for a soothing ambient mix, your drums can just sort of be a "wash" in the texture; add reverb/delay, EQ out any piercing frequencies, reduce their volume, remove the strong bass frequencies of the kick, etc. The exact choices are up to you, and without hearing any specific mix, it's impossible for me to give much more detailed advice than that. I can, however, give more generalized advice..

* Whenever the melody is playing, unless you are specifically working in a very ambient style, it should be clear and present. Usually whatever instrument I have playing a lead is more present in the mid to upper frequencies. If it's playing too many low notes, or it's too much of a bassy instrument, it won't stand out. EQ can remedy this. Also, adding a bit of reverb and delay, along with some VERY light distortion, to a synth or lead guitar can make it cut through the mix much better.

* Generally, it's not a bad idea to EQ down the low frequencies of harmony instruments, such as synth pads, strings, choirs, and rhythm guitars. This would be around 20-250 Hz. You should distinctly hear the bass instrument (be it a guitar, synth, whatever) as well as the kick if you have a drumline going. If they're "lost in the mix" then chances are your harmony instruments are muddying things up, and you need to either have them play an octave or more higher, or EQ their low end down. However, don't OVERDO this.
You shouldn't be rolling off every instrument, and, especially for synth pads, their character can be killed if you EQ their low end down too much. If the low end is muddy, you might also consider simply reducing the volume of the offending instruments rather than EQing. Try it both ways.

* Don't overdo it with sustained instruments, or sustained instruments playing chords. I would say that one of the key problems I hear in muddy mixes is too many sustained notes going on. It's easy to lay down a bunch of different synth pads or string soundfonts playing a bunch of pads, and they might sound great by themselves, but when combined with everything else, it can be hard to clean up the resultant mud. My suggestion is to have no more than one or two sustained pad instruments going at once (unless you're really adept at processing + EQing). Go easy on the reverb/delay on those instruments as well.

* Reverb and delay on drums is usually a bad idea. I know this is a big generalization, but again, a big problem that I constantly hear is too much reverb on drum parts. Delay, 99% of the time, just makes the drums sound very annoying. If you absolutely must have some reverb on your snare, for example, start with a wet/dry ratio of 0:100 (fully dry) and then SLOWLY inch it up. If you're on headphones, mix it a little drier than you would prefer, because most people tend to mix in too much reverb on headphones.

* Don't be afraid to make use of volume automation to make parts fit better. You might have a string section playing some opening chords before the main melody of your mix comes in, but even though the strings were the focus before, they might be too loud when mixed with the melody. Use your sequencer's automation function to drop the volume of the strings when the melody comes in. Most listeners will focus on the melody immediately anyway, so it won't seem unrealistic or unnatural.
If it does, try adjusting the volume BEFORE the melody comes in so that it's a gradual change (a curve).

* For a basic drum groove (with bass drum, snare, hats), usually your bass drum should be loudest, followed by the snare, followed by the hats. Shakers and tambourines should be about the same volume as the hats. Toms can be about the same volume as the snare, and ethnic percussion like congas and bongos can be a little softer than the snare. Overly loud hihats or extra percussive parts can very easily ruin an otherwise nicely-made song. When in doubt, stick to the simple kick+snare rhythm.

* When panning, it's usually not a good idea to have anything too far to one side unless it's counterbalanced by a similar sound on the other side. This is a common technique in recording guitars, for example; do one take in mono and pan it "hard" (full) right, then do another take that's as close as possible to the first in mono, and pan it hard left. The same technique can apply to other sampled instruments if you want to get some stereo width. This works especially well for synthesizers if you're going for a "fat" sound in genres like trance.

Equalization

This is a broad topic, for sure. But the EQ is one of the most important tools in your arsenal when it comes to shaping sounds. My strongest advice here would be to NOT be afraid to make big changes to EQ settings (though there is an exception which I'll explain in a minute). I routinely boost or cut bands by 10 dB or more to get the exact sound I want. This is especially true if you're mostly working with synth sounds. Most common synthesizers use subtractive synthesis, and most patches within those synths use lowpass filters, which leave the low frequencies in there. If you're like me and you love sounds like the 303, you'll be dropping bass frequencies and boosting mids and highs all the time. Carve 'em up!
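To make the EQ talk concrete, here's a minimal sketch (Python, standard library only) of the kind of peaking-EQ band a typical parametric EQ gives you, built from the well-known Audio EQ Cookbook biquad formulas. The function names and the 2.5 kHz / +10 dB settings are my own illustration, not anything from a specific plugin:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for one peaking-EQ band (Audio EQ Cookbook)."""
    a_lin = 10 ** (gain_db / 40.0)           # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    # normalize so the first denominator coefficient is 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(fs, f, b, a):
    """Magnitude response of the biquad at frequency f, in dB."""
    w = 2 * math.pi * f / fs
    z1 = complex(math.cos(w), -math.sin(w))  # e^(-jw)
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return 20 * math.log10(abs(num / den))

# A +10 dB boost centered at 2.5 kHz - the kind of "big change"
# discussed above - at a 44.1 kHz sample rate:
b, a = peaking_eq_coeffs(44100, 2500, 10.0)
print(round(gain_at(44100, 2500, b, a), 1))  # 10.0 right at the center
print(gain_at(44100, 100, b, a))             # near 0 dB far below the center
```

The point of the math: the boost is full strength at the center frequency and falls back to (nearly) unity elsewhere, so even a drastic band doesn't touch the rest of the spectrum much.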
The exception to this policy is when you're working with vocals, instruments exposed for a solo, and orchestral sounds. With these, you have to be more careful. The human voice has a naturally wide frequency range, which should generally be preserved. Making large EQ cuts or boosts can make the voice sound very unnatural. The same applies to recordings or samples used for solo parts. Everyone knows what a piano sounds like, and if you have it exposed in the mix but heavily EQed, it will sound strange. Keep the EQing to a minimum in this context. Orchestral sounds can be like this as well. If you're doing a mix entirely with orchestral sounds, or if you have orchestral sounds exposed at some point in your track, it will hurt their character if you EQ them too much. Subtle adjustments are all you should need. Of course, if you're fitting them in with synths and other non-orchestral instruments, you have more leeway when it comes to EQ, as they are not the focal point.

Layering

Layering is an often overlooked, but very powerful tool that you can use when producing a mix. If you're designing a groove-centric track but you just can't seem to get the right drum sound with your samples, don't give up. Load multiple samples of the same thing - three kick drums, two snares, two toms - send them through the same effects, and mess with their volumes to find a nice balance between them. You might have a very deep bass drum hit that has lots of power, but no presence anywhere else. So, you can then load a very snappy kick with higher frequencies, layer them, and now you have a kick that might fit your mix a lot better. This doesn't just apply to drums, though. One technique that I use a lot applies to synth basses. Check out the following track and listen to the synth bass that comes in with the main melody. www.zirconstudios.com/newsong.mp3 This bass sound is actually comprised of two separate patches. One is the deep bass you hear at the very beginning of the song.
The other is a saw wave patch that is filtered, and then the filter rises (or opens) at the beginning of each note all the way up to the high frequencies. The saw patch has all the low end EQed out but the mid frequencies emphasized. When this patch is layered with the other bass patch, it creates a subtle but cool (in my opinion) effect that makes the bass sound more interesting overall. Pads are another thing that can benefit greatly from layering. Now, just to be clear, I did write earlier that you don't want too many pad sounds at the same time. To be more specific, what I meant was that you don't want too many distinct pads at the same time. One of my favorite tricks for making atmospheric pad patches is taking a choir, running it through a phaser, reducing the volume, and then layering it with a soft subtractive synth pad, reducing the volume of that as well. Because the volumes of both of these sounds are reduced, they combine to be about the same volume as most single-layer pads, but the final sound is more interesting than any of the individual sounds would be. There are a ton of things you can do with layering. These few ideas should get you started.

The Big Picture

Sometimes, working on individual parts or sections of a mix still might not reveal overall problems. Take a listen to your work from start to finish. Is the texture or soundscape always the same? If someone moves their Winamp playback cursor to random points in the song, will they be able to tell roughly where in the song they are? Some might argue this is more of an arrangement or composition issue than anything else, but I think production has a lot to do with it. Simply put, outside of a few cases, you don't want to have the same instruments throughout the mix. Even the most interesting drum grooves can become a bit grating if they play through a whole mix. Listen to your mix with a really critical ear. There should be some sort of dynamic in what textures you're using.
Let's say you have a 45 second introduction where you're bringing in all the different parts. Then, from :45 to 1:45 you're playing the melody with all of those parts. A common mistake is then letting all of those instruments keep going from 1:45 to the end of the song (say, at 3:30). Bad idea. You don't want to fatigue your listener; maybe at 1:45 you can drop out the bass, remove a few layers, and bring in some new instruments to play the melody/harmony. Then, you can drop out a few more at 2:15 before bringing back everything else (along with one or two new tracks) at 2:30, the finale. At 3:10 things die down and you have your outro. This works a lot better than simply leaving everything the same from :45 onward and adding new things on top of it. Of course, depending on the style you are working in, you may not want a typical song structure. That's fine. However, the same rules still apply. You don't want a "wall of sound" assaulting the listener at all times. A major complaint that many people have about modern music in general is that it tends to be completely squashed and maximized to the point where there are no dynamics of any kind. While having "maxed out" volume levels is great at some points during a song, it's just a poor decision to have that going at ALL times. Don't fall into the same trap.

Mastering

Mastering is the very last thing in your process. Honestly, you should not have to do a lot here. Provided you have at least somewhat adhered to the stuff I've been talking about above, this part should just be the finishing touch - the "polish", if you will. There is no one right way to approach mastering, but generally, you want the overall volume of the mix to be at a level where the listener doesn't have to constantly adjust his or her volume control because the mix is too loud, too soft, or too variable in dynamics. You also don't want there to be any overly powerful or overly weak frequencies.
Usually my mastering chain looks like this: compressor -> EQ -> limiter. Since you might not be entirely clear on what compression and limiting actually do, here's my explanation (quoted from another topic I wrote in).

- A compressor works like this. You set a THRESHOLD volume level. When the waveform reaches this threshold volume level, it is reduced in volume. You control how MUCH it is reduced in volume with the RATIO control. 1:1 means that for every dB of sound above the threshold, 1 dB of sound will go through - in other words, no effect. 2:1 means that for every 2 dB of sound above the threshold, only 1 dB will be heard. 5:1 means that for every 5 dB of sound above the threshold, 1 dB is output. And so on and so forth. Thus, if you only want to compress something a little bit, you wouldn't use more than 3 or 4:1 compression. Finally, the compressor has a GAIN knob to increase or decrease the overall volume of the sound AFTER the compression. Now, a limiter is simply an extreme form of compression. It has a very, very high ratio (sometimes infinite). In other words, as soon as the sound hits the threshold, you could send 50 dB through it and it will only output the threshold level. You can effectively use a compressor as a limiter if you just set the ratio really high. Other controls to be aware of:

- Attack: The time it takes for the compression to take place, usually measured in ms. If you have an attack of 15ms, that is pretty quick, but some sound will still be uncompressed. So, if you're compressing a snare heavily, you'll hear the *THWAP* right at the beginning, and then within 15ms the sound will be compressed.

- Release: The time it takes for the compression to stop after the sound has gone below the set THRESHOLD level. Usually between 200-800ms. Any longer and it's going to sound funny.

- Knee: This isn't on every compressor, but this basically controls how "hard" the compression activates.
When the sound hits the threshold, does it limit it very sharply right off the bat, or does it ease into it? (The original post included a diagram of this curve, with -10dB as the threshold.) So what are practical uses of compression and limiting? Let's say you have a recording of a guitar. Throughout the recording, you have quieter parts at -28dB, and louder parts at -8dB. That is a 20dB dynamic range, which is pretty big. So, you set your compressor to a threshold of -16dB, a ratio of 3:1, and gain of +10dB. Now, all the parts louder than -16dB are being reduced toward -16dB. Your loudest peak before was -8dB, which is 8dB above the threshold, and the ratio is 3:1. 8/3 is about 2.7dB, so your loudest peak after the compression will be around -13.3dB. So, the quieter parts are still -28dB, but now the louder parts are -13.3dB instead of -8. That means that the dynamic range is now about 14.7dB instead of 20. But wait, didn't this REDUCE the overall loudness? AHA! That's where the GAIN comes in. We set the gain to +10dB, meaning that the quietest parts are now -18dB, and the loudest ones are about -3.3dB. This is pretty loud in the grand scheme of things, but not uncommon for a commercial track. Now, think about it - if you had just pumped up the volume by 10dB before, the peak level would have been +2dB, which is clipping. If you had done that and then thrown a limiter on it, you'd still have a big dynamic range. So, compression is very useful. Of course, in an actual song situation you usually aren't constantly measuring the quietest and loudest parts of your song, so you won't have exact numbers. If you DID, you wouldn't even need a limiter, because you'd know exactly how to set your compressor so that nothing would clip. Because this is not the case, it's ALWAYS good to put a limiter set to about -0.2dB or so at the very end of your mastering chain to make sure that no really loud spikes get through the compressor.
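The arithmetic in that worked example fits in a couple of lines. This is only the static level-in/level-out curve of a downward compressor - it ignores attack and release entirely - and the function name is mine, not from any particular plugin:

```python
def compress_db(level_db, threshold_db=-16.0, ratio=3.0, gain_db=10.0):
    """Static curve of a downward compressor, working entirely in dB.

    Above the threshold, only 1 dB comes out for every `ratio` dB in;
    makeup GAIN is applied after the compression.
    """
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + gain_db

# The guitar example from the text (threshold -16dB, 3:1, +10dB gain):
print(compress_db(-28.0))           # quiet part: below threshold, gain only -> -18.0
print(round(compress_db(-8.0), 1))  # loud peak: -16 + 8/3 + 10 -> about -3.3

# A limiter is just this same curve with a very high ratio:
print(round(compress_db(0.0, threshold_db=-0.2, ratio=1000.0, gain_db=0.0), 1))  # ~ -0.2
```

Notice that the quiet parts pass through with gain only, which is exactly why the dynamic range shrinks even as the whole track gets louder.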
Some limiters also offer an input drive or saturation function that will boost the volume before limiting it, creating a slightly distorted but often pleasant sound. Finally, there are multiband compressors/limiters. Why would you use these? Think about the following situation.. you have this kickass bassline and kick part. Man, it rules. You gotta keep the volume jacked up so people can hear it. Then you also have a cool vocal line that's sort of floating above everything else. If you're just using a normal compressor, because you have the bass parts boosted, they're going to trigger the compression of the entire waveform. So when the bass part gets dropped a few dB because it's going over the limit, the vocals might not even be close to that threshold yet. Oops. A multiband compressor gets around this problem by separating the audio into THREE bands (low/mid/high), with individual compressor controls for each. This way, you can compress just the bass but not the vocals, or vice versa. Or anything else, really! By the way, this should also explain why compressor plugin presets are not useful. How COULD they be? The nature of compression is such that you have to tailor it to individual tracks. The only time you should be using presets is if you've designed (or come upon) a preset that works really well for a certain type of sound, and you know how to recreate that sound well. This is the situation with me. I have a preset I created for my compressor and limiter that works extremely well for nearly all the original electronic songs I do. However, I can do this because, well, I've written enough original electronic songs that I tend to use the same production techniques in all of them. Thus the approach to mastering is going to be the same. Here's an MP3 example of compression in action. The first loop (played for 2 bars) is uncompressed. The next one has a threshold of -15dB, 4:1 ratio, and a little bit of gain. 
Same peaks as the first one, but you can hear it sounds a bit louder overall. The third one is pretty extreme, with something like a -30dB threshold and 10:1 ratio with lots of gain. Again, not TECHNICALLY louder than the first two, but it certainly sounds that way. http://www.zirconstudios.com/Compression.mp3 -- While it's your choice whether or not to use compression (and tweaking a compressor is really the only way to find out whether you think it would work or not, though in my experience it usually does), limiting is essential as the final step in the mastering chain. Set the output to -0.2dB. The attack (if any) should be 0ms. The release is usually somewhere between 200-800ms.. that's up to you to tweak, but 200-300 is a safe number. This will ensure that you have no clipping anywhere in your mix. In regards to the equalization, you should not be doing anything major. If, when listening to your final mix, you feel the need to cut any frequencies by more than 3dB or so, you need to go back and edit individual parts again. You did something wrong in the balance phase of the process. If you were careful earlier, the most you'll have to do here is perhaps shaving off a little bit of bass, or emphasizing the highs a bit.

Encoding

Not really part of the production process per se, but if you plan on making your song available to other people, you'll probably be converting it to MP3. I think you can't really go wrong with the RazorLame software, which operates off the excellent LAME mp3 encoding engine. There's no reason not to use VBR (variable bitrate), which basically maximizes quality while minimizing size. VBR is the only way that Disco Dan's "Triforce Majeure" mix from Zelda 3 sounds so good, despite fitting under our size limit and being a very rich and long mix. There's no reason to go above 224 or 256kbps max VBR, and some would argue even those numbers are high.
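For reference, RazorLame is just a front-end; the same min/max VBR choices can be made directly on the LAME command line. A sketch, with flag meanings taken from LAME's own documentation and placeholder file names:

```shell
# VBR encode: -v enables VBR, -V sets the VBR quality level (lower = better),
# -b and -B set the minimum and maximum VBR bitrates in kbps.
lame -v -V 2 -b 32 -B 256 input.wav output.mp3
```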
In terms of minimum VBR, you can go all the way down to 32kbps and not really hear any hit in quality. However, if you have a piece with a huge dynamic range, like an expressive piano solo or an orchestral piece, you might want to bump that minimum up a bit. For more detailed information on encoding, I'm probably not the person to ask. Usually, if I'm trying to really maximize the quality of a mix while keeping it under the 6MB limit (I did the encoding for Wingless' last two remixes), I pretty much just tweak different settings gradually until I get it as close to 6MB as possible.

Conclusion

Production, like any aspect of music making (or just about anything in life), is primarily about practice. The tips I've listed above will hopefully be of some help to you when refining your own technique, but they are complements to practice, not substitutes. So, when producing a song, try to take breaks to keep yourself objective. If you've been working on something all day, get some sleep and approach it the next day with fresh ears. Bounce it off some non-musical friends and see if they notice anything you missed. Listen to the mix on different sets of speakers or headphones. Above all, listen to your work in progress all the time! Some might disagree with me on this, but if you keep listening to the same track while producing it, and you try to stay as objective as possible, you can REALLY fine-tune it and make it shine. It's obsessive, but it works for a whole lot of people. If all else fails and you really don't know how to achieve a particular sound, listen to a song which SOUNDS really good to you, and try to break down what gives it that nice sound. I know many artists who do this, and while it seems a bit counter-intuitive if you're trying to create your own sound, it's much more helpful than you'd think.
The understanding of what leads to a given sound or feel is powerful in that you can add that to your existing repertoire of techniques and even combine it with them. In closing, feel free to post here with comments, questions, additions, criticisms, or whatever else you want. Now go make some music!
  5. Like I will have 3 MIDI tracks, and 2 audio tracks, and everything is fine and dandy. Then I add one more audio track, and now one of the first 2 is lagging. My WIP I just posted (Rockin' Four-Two) has sync problems all over the place as a result, whereas if I mute all but one audio track, all of them are actually dead on. -Ray

Wait, so they're lagging in relation to each other? Are you sure they're lined up properly?
  6. Easily fixed problems don't mean a mix will be rejected. I personally have accepted plenty of mixes with easily fixable problems. Objectively speaking, when I designed the arrangement here, I did it in such a way as to incorporate a good level of interpretation, variation, and original material. If this were put through the panel, the arrangement factor would be fine every time. The vocals would be the only thing, if ANYTHING, that people would have a problem with, but again, I have a pretty good grasp of vocal standards and this passes the bar in that category. Let me be clear for a minute. As I posted a few times, we didn't finish this entirely in one night. Most of it was sketched out and conceived in one night. The lyrics were all written out, the basics of the arrangement were set down, the guitar rhythms were set, and so forth and so on. But Jill's vocals were recorded on her setup at home, and Taucer's guitars and Shonen's part were sent to me online a few days later, where I integrated them with the rest of the mix on my own computer and polished it from there. DLux's part was the one recorded on my mic, and as you can hear, the recording quality is significantly lower. If everything else was done on that, you'd be able to hear it! Absolutely! Your feedback is much appreciated. Just clarifying a few things is all.
  7. Make sure there's no time stretching occurring in the FL sampler (set all the knobs to "Reset" or "None"). Otherwise there should be no reason that would happen.
  8. This is a really great track that I enjoyed quite a bit. One thing that endears it to me is that it's very similar stylistically to the kind of music that the television show MONK tends to play throughout each episode. Anyone who has watched it will know what I'm talking about. You could seriously sub this in for the old theme song (written by Jeff Beal) and it would fit perfectly.
  9. I like snap a lot. Personally I don't usually use this many kicks in a pattern (and I'll usually vary the velocity a bit) but that's because I don't normally do hip hop stuff. Regardless this is cool.
  10. My roommate uses the KRKs and since I'm about one foot away from where he sits, I hear them as well. To me they sound very nice. He likes them a lot also.
  11. Tapco S5s and the KRK Rokit series are around that price range, and I know they're good.
  12. With budget monitors, you sort of get what you pay for. There are a whole lot of factors that can color sound coming out of monitors, from the monitors themselves, to their placement in relation to each other, their placement in relation to you, to your room, etc. Nonetheless it's definitely not a bad idea to have a pair as another reference. Keep in mind also that you're not looking for the BEST sound necessarily, just the most TRUE sound. That is why Yamaha NS10s were so popular for years among many engineers. They were not particularly good-sounding monitors, but they most accurately represented the home stereo systems of the time period and thus gave the most true mix.
  13. Uh.. why wouldn't you just use an audio clip for that? Because that's exactly what you CAN do with them. I assumed that you were just arbitrarily choosing to use the Sampler, but now I'm wondering if you are aware of how audio clips work at all.
  14. First of all, I love FL. I had to take a full course on Logic (which I did well in) and I hate it. FL does everything better, in my opinion. I did Lover Reef, which had tons of audio stuff, entirely in FL. Anyway: Gol talked about this at one point. He said that the sampler will read time information from the file if it is encoded there. In other words, properly encoded files should not be stretching at all. And he's right, they don't. However, in the event you do encounter this, go to the Channel Settings tab, right click on the "Time" knob, and click None. I run into the same thing myself with some WAV files. Sometimes it sets that "Time" knob so slightly that you can't even see it's moved, but it has a noticeable effect.
  15. Well, it's not necessarily that it's heavier or lighter than a real piano or MIDI keyboard. Semi-weighted just means it's "somewhat realistic" and fully weighted means it's "very realistic", basically. There are pianos with very light actions and pianos with heavy actions. They respond in different ways depending on how much pressure you apply to the key. Anyway, I think a safe bet would be semi-weighted. Non-weighted (what I use right now) is OK but it's really not any good for any complex performances, imo.
  16. "Synth action" basically has no sort of resistance at all. It is not at all realistic in terms of comparison to piano action. Most smaller MIDI controllers are synth action. In regards to half-weighted vs. fully weighted (or hammer action/graded hammer action), I wasn't quite sure of this myself so I decided to give Sweetwater a call. I was actually put on the line with a sales engineer who was an experienced pianist, so I felt like I got someone who really knew what they were saying. Basically, he said that the difference between the two action types is somewhat hard to articulate, in that they both operate off the same concept; they add resistance to the (usually plastic, but sometimes ivory) keys in order to make it seem more like a real piano. Fully weighted keyboards are apparently just more like a real piano in terms of the way they respond to different "touches", the effort it takes to move the keys down, and the speed at which they come back up. Semi-weighted is more of an approximation of that effect than an exact emulation. Edit: GLL beat me to it. Lemme build on what he said a bit. Chances are, it's true that fully vs. semi-weighted is a distinction that you won't care about unless you're a piano player. Even the sales engineer I spoke with said that - fully weighted keyboards are designed for people who really want the feel of a piano. That's not relevant for someone who's not trying to use a keyboard for that. For $200 you can pick up a nice M-Audio or Studiologic semi-weighted controller; the fully weighted version would probably be $100 more or so. In regards to using a keyboard as a drum controller, hell, I did it at the NYC meetup. It feels natural to me. Hitting a bunch of different boxes is less intuitive, in my opinion - I've tried both. It's easy for me to remember "C is the bassdrum, E is the snare, F# is the closed hihat, G# is pedal hihat, A# is open hihat, and B is a high tom."
It's not easy for me to remember "The 2nd pad from the left on the 2nd row from the top is a snare". All the pads look the same. I just have a hard time doing things that way.
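That key-to-drum layout maps pretty closely onto the standard General MIDI percussion assignments, which is a quick sketch in Python below. The GM note numbers are the published standard; the octave labels assume the common "middle C = C4 = note 60" convention, and my personal layout above differs from GM in a couple of spots (GM's E is an electric snare, for instance), so treat this as an illustration rather than my exact setup.

```python
# General MIDI percussion map (channel 10) for the keys mentioned above.
# Note numbers are the standard GM assignments; octave labels assume
# middle C = C4 = MIDI note 60, so these keys sit around C2.
GM_DRUM_MAP = {
    "C2": 36,   # Bass Drum 1
    "D2": 38,   # Acoustic Snare
    "F#2": 42,  # Closed Hi-Hat
    "G#2": 44,  # Pedal Hi-Hat
    "A#2": 46,  # Open Hi-Hat
    "B2": 47,   # Low-Mid Tom
}

NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_number(name):
    """Convert a note name like 'F#2' to its MIDI note number."""
    pitch, octave = name[:-1], int(name[-1])
    return NOTE_OFFSETS[pitch] + (octave + 1) * 12

# Sanity check: every key name in the map resolves to its GM note number.
for name, gm in GM_DRUM_MAP.items():
    assert note_number(name) == gm
```

The point being: on a keyboard, the drum layout falls on note names you already know, which is exactly why I find it easier to remember than a grid of identical pads.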
  17. Typically, I approach things one of two ways. One is a spontaneous method where I sit down at my computer with a tune in mind and see what comes out of my head. Oftentimes I start with a simple riff, bassline, or drum loop and then build off of that into a full mix. This is usually the approach I take if I want to remix a specific theme, like for a project or something. Occasionally I will vary this up and start with the chorus or main melody of a song, but I find that this is usually not very effective and I prefer to start with a simple component of the original (like the bass line from Fei Long). The other way is when I actually have an idea of what I want beforehand. I'll just be listening to music and get the inspiration to do a remix of a theme, such as Kefka or the sewer from Chrono Trigger. I'll often have a very basic idea in mind for something I want to do with the mix - by NO means a full plan, this is usually only like 15-45 seconds or so of material - and then I basically do my best to put down what I have in my head. Once I begin working with my idea, I usually come up with more ideas, which eventually lead to a full-fledged mix. Regardless of which method I choose, it's within about 30-60 seconds of the song that I can tell whether I'd like to continue it and develop it or not.
  18. Oops, again, didn't know that the ends actually had separate sounds. Lemme just briefly explain the mic types in question for you; that way you can make a decision yourself based on what you want. In terms of condensers, you have varying diaphragm sizes. Small diaphragm condensers are usually used for stuff like cymbals with a lot of high frequency information. Large diaphragm condensers are common for vocals and other things that have a wider frequency range. Yours is a medium diaphragm, so it's pretty effective at both applications, I would wager. Dynamic mics that are moving-coil also have different size diaphragms, like condensers. However, due to the way that they operate, they don't respond as quickly as condensers do, and thus overall tend to produce a less detailed sound. On the other hand, they are typically capable of recording a wider range of dynamics than condensers - eg. loud stuff - and generally capture a full range of frequencies well. These are more or less generalizations. Individual mics have unique frequency responses. While I'm familiar with the theory behind this sort of thing, I readily admit I have not had a lot of field experience, so I can't give specific advice on what mics to use for what applications beyond what I've studied. I haven't really encountered the type of drum you are describing before, but basically you're saying that one side has a strong fundamental with lots of harmonics. With that kind of odd nature, I really don't know what would work better in this case.
  19. I don't see how compression emphasizes dominant frequencies any more than simply turning up a volume knob would emphasize dominant frequencies, as it's just affecting dynamics. Maybe I'm missing something. This just seems a bit misleading.
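To make that point concrete, here's a minimal sketch of a hard-knee compressor's static gain curve. The threshold and ratio values are arbitrary examples I picked, but the key observation stands: the gain computation looks only at the signal's level, never at its frequency content, so dominant frequencies aren't "emphasized" any more than a volume knob would.

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain curve of a hard-knee compressor, in dB.

    Levels at or below the threshold pass unchanged; above it, every
    `ratio` dB of input yields only 1 dB of output over the threshold.
    Note the computation depends only on level, not frequency.
    """
    if level_db <= threshold_db:
        return 0.0  # no gain reduction below threshold
    over = level_db - threshold_db
    return over / ratio - over  # negative value = gain reduction

# A signal 12 dB over the threshold is turned down by 9 dB at 4:1,
# whether that energy is bass or treble.
print(compressor_gain_db(-8.0))   # -9.0
print(compressor_gain_db(-30.0))  # 0.0
```

Real compressors add attack/release smoothing on top of this curve, but that's still a time-domain operation; nothing here is frequency-selective.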
  20. Just to clarify, my suggestion was specifically for the double-sided drum. For the tabla, you could mic it as you would a snare or tom. Ok, so this thing is harmonically rich. I didn't know that. Typically, when recording stuff that is rich, people tend to like condensers. Dynamic mics are a little more suited for things where the definition of the sound is less important than its "power" (such as an electric guitar or a snare drum), or for things that are very loud (again, like drums). The M-Audio Pulsar appears to be a condenser, which I think would be good for this kind of thing. You're not trying to capture raw transient energy like a snare - there is detail that you want as well. So, I would imagine this mic would be adequate for that. If you are going to go ahead with using two mics, it would be best to get another mic of the same model, but any similarly constructed condenser would work as well.
  21. Oh wow, this is really cool. I could make some nitpicks here about a few production aspects (mids-highs are a bit muddy at times, reverb/delay is sometimes excessive, and the wind instrument doesn't sound so hot), but for the most part, the synths are well-designed and executed, the beats are hot, and I LOVE THAT 3/4 CHANGEUP!!! Very creative! The switch to 'hardcore' all of a sudden is also unexpected, but welcome. The arrangement factor is most definitely there given the simplicity of the original. I've heard a lot of electronic mixes and I can safely say this is one of the most unique and well-made ones of the lot. Great job. YES
  22. It is probably an issue with the physical configuration of your recording setup. You'd be hard pressed to find recording software that automatically forces you to record in mono. Here are some possible problems:
* The mic you are using only records in mono
* The input on your soundcard is mono (unlikely, but possible)
* The cable you are using only transmits mono
Without more information about your recording setup, including type of cable, type of mic, type of soundcard, and so on, it is difficult to give a more specific answer.
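As a first diagnostic step, you can check what actually landed in the recorded file using Python's standard-library `wave` module; here's a quick sketch (the filename at the bottom is just a placeholder). If the file itself reports one channel, the mono collapse happened somewhere in the hardware chain (mic, cable, or soundcard input), not in the software's file format.

```python
import wave

def describe_recording(path):
    """Report channel count, sample rate, and bit depth of a WAV file."""
    with wave.open(path, "rb") as wav:
        channels = wav.getnchannels()
        rate = wav.getframerate()
        bits = wav.getsampwidth() * 8
    kind = "mono" if channels == 1 else f"{channels}-channel"
    return f"{kind}, {rate} Hz, {bits}-bit"

# Example usage (hypothetical path):
# print(describe_recording("my_take.wav"))
```

Keep in mind a stereo file can still contain two identical channels if a mono source was duplicated, so listening is the final check, but this at least tells you where to start looking.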