
rig1015

Contributors
  • Posts

    221
  • Joined

  • Last visited

Everything posted by rig1015

  1. Title idea: A Chozian Legend, because the song sort of has a mythical storytelling feel to it. I had a question / concern: when you filter the drum line (there are moments when it sounds like you do [~0:24 till ~0:44 is the first one]), is there any way to make that filtering either more noticeable or more deliberate-sounding? IMO the way you have it now sounds like an engineering accident. Ideas: maybe add a flanger on top of the filtering moments to add a washing sound, or run a low-cut filter as well so we get more of an "old-time radio" effect. Just polishing suggestions / ideas I'm trying to throw at you. I totally love the clavi-tone / harpsichord instrument you use. And awesome job on the choir (at least on making it sound like the Prime choirs).
  2. Sweendrix, I love the kicked-back strumming that you find a way to work into your tracks. Took a good listen. I like the psych evaluation / interview in the beginning. As for your questions about compression: it sounds more to me like you need EQing than compression. Your mid-highs / highs sound overbearing. I downloaded your mix and played with EQ in iTunes (yes, yes, it's a simple EQ, but it's a quick and easy way to see if EQing will help). I boosted your 500Hz +3dB, dropped your 4KHz -3dB, and curved the other bands around those tonals. This made the whole mix less resonant, easier on the ears, and brought out the dynamic range a bit more. You don't have to do this EQing on your master fader (Output 1-2), but I preferred the way it made it sound. I only suggest EQing because it doesn't sound like everything has its own place in your mix: cymbals sharing freqs with the guitar harmonics, samples and synths arguing over a 3KHz tonal. Not bad, just needs love. If you do want to compress, remember that as you drop the threshold and increase the ratio, your low mids will start to gather acoustic pressure in the mix. You don't want to clean up definition only to muddy up the 200-500Hz area. What kind of compression hardware / software do you have?
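(Not from the original post, but for anyone curious: the EQ moves described above, +3dB at 500Hz and -3dB at 4KHz, can be sketched as peaking-filter biquads. This is an illustrative numpy-only sketch using the widely published RBJ "Audio EQ Cookbook" formulas; the sample rate and Q values are my own assumptions, not the poster's settings.)

```python
import numpy as np

def peaking_eq(fs, f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking-filter coefficients (b, a)."""
    a_lin = 10 ** (gain_db / 40.0)          # sqrt of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]               # normalize so a0 == 1

def gain_db_at(b, a, fs, f):
    """Evaluate the filter's magnitude response (in dB) at frequency f."""
    z = np.exp(1j * 2 * np.pi * f / fs)
    h = np.polyval(b, z) / np.polyval(a, z)
    return 20 * np.log10(abs(h))

# Hypothetical settings mirroring the post: boost 500 Hz, cut 4 kHz.
b_boost, a_boost = peaking_eq(44100, 500, 3.0, q=1.0)
b_cut, a_cut = peaking_eq(44100, 4000, -3.0, q=1.0)
```

At the center frequency, a peaking biquad built this way hits exactly its nominal gain, which makes it easy to sanity-check your own implementation.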
  3. It was just an artistic preference. Besides, what if I want to play or perform it? It was just an idea to make the overall track more workable for others, esp. DJs. I totally understand why people wouldn't make their remix track like that. It's just a bummer.
  4. Here is a good room calculator to help calculate standing waves. It should help with the calculations for the treatment process.
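(The math behind a room calculator like that is the standard rectangular-room mode formula, f = (c/2)·sqrt((n/Lx)² + (m/Ly)² + (p/Lz)²). Here's an illustrative sketch; the room dimensions are made up, and this assumes a simple rectangular room.)

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def axial_modes(length_m, count=4):
    """First few axial standing-wave frequencies for one room dimension."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

def room_modes(lx, ly, lz, max_order=2):
    """All (n, m, p) room modes up to max_order, sorted by frequency (Hz)."""
    modes = []
    for n, m, p in itertools.product(range(max_order + 1), repeat=3):
        if (n, m, p) == (0, 0, 0):
            continue
        f = (SPEED_OF_SOUND / 2) * math.sqrt(
            (n / lx) ** 2 + (m / ly) ** 2 + (p / lz) ** 2)
        modes.append(((n, m, p), round(f, 1)))
    return sorted(modes, key=lambda t: t[1])

# Example: a hypothetical 5 m x 4 m x 2.5 m room.
for (n, m, p), f in room_modes(5.0, 4.0, 2.5, max_order=1)[:5]:
    print(f"({n},{m},{p}) -> {f} Hz")
```

Clustered low-order modes are where the bass humps pile up, which is what the treatment is trying to tame.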
  5. This sounds really cool! I love what you were able to do. The only things that pulled on me artistically were the samples and the constant high-octave synths. Vocal samples: IMO, do more with them, make them a bigger part of the song, or cut them out. If you want to keep them, try vocoding, scratching, stuttering... those sorts of reworks. The pitch shift on the sample "Welcome to-" in the middle of the track is moving in the right direction; more of that kind of stuff. But be careful, because you don't want them to irritate the listener. High-octave synths: they just started to bug me after a while. Is there any way to add a kind of mid-ranged instrument to assist? Fade-out ending: DJs hate fades, and this track could be performed. Is there any way you could just have the song give us a hard kick hit with a nice reverbed tail as your layers echo off? I can hear your synths finish their notes before you complete the fade, so you could put it there? I know OCR places time constraints on us, so I don't wanna mess with your song length. All in all this track is pretty awesome; I could keep it and love it as-is. Good job. Hope when you sub they take it.
  6. EQing is one of those things that separates the $100-an-hour engineers from the $20-an-hour ones. When I was taught about EQing I was given guidelines for various types of instruments, but in the end was told that it comes down to enhancement: does the EQing you are doing enhance the mix or hurt it? One thing you can try is turning the monitor volume way down (like the song is a whisper in the background) and listening for what sticks out; you'll notice if the EQing is making the wrong instruments stand out at that low dB. The mix engineer I learned from also pushed the idea of an even spread when you do this: as the volume (dB) drops, everything should fall away evenly. I do like the non-EQ'd version over the EQ'd one, though.
  7. I have a 15" MacBook Pro with Logic Pro 7: get the Mac. Buy as much as you can in memory and HD space; those are the ONLY areas I've had to upgrade so far, and I'm going on my 4th year. Logic Studio is probably the best out-of-the-box-install-sit-down-and-go software out there (esp. for Mac). I recommend you buy a full version of it. If you want to go GarageBand or Express, that is fine too; not as much usability as Logic, but when you are ready to upgrade (if ever) you can buy Logic and ALL your sessions will still work with everything the way you set it up in Express / GarageBand. Also remember Boot Camp. If you're not ready to leave the wonderful world of Microsoft Windows, you can partition with Boot Camp for a Windows side to your MacBook. Then you can boot into Windows and use all those VST plugs that you've grown so attached to. I use it; works fine, I can even run ProTools on the Windows side with full allotment. Edit: memory and HD space are like the only things you can upgrade on a laptop, derr.
  8. It shouldn't cost you anything to treat your room; just time, measurements, and math. Bass trap: you can make one with wood from Home Depot. It just has to have baffles to help diffuse sound. (And I don't think you need one unless your room is already very boomy, but I won't say for sure.) ELC sweeps can be done on a keyboard or synth: white noise or sine waves sweep through the 20-20 (20Hz - 20KHz, the range of human hearing) out of your sound system (speakers) while a stand-alone mic (rent one) records all the sound your monitors (speakers) make in the room. Run the mic input through some spectral analyser and see how bad your bass humps are for the room. Adjust bass. Repeat the process, see how bad the bass humps are now. Adjust bass. Etc. until happy. But remember, you're not buying a $2,000+ SKS sound system, so treat only if you want to geek out and go through the process of acoustical engineering. It can be an interesting learning process. Took me 3 months of measuring and treating before I was happy with the studio room I mix in, but I learned SO much about acoustics. It is just a suggestion; it will take some time and effort on your part, but you'll come away an overall stronger engineer. Besides, Logitech really tries to compensate for any non-uniform frequency response with their FDD2 technology.
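(For the "sine waves sweep through the 20-20" step above, here's an illustrative sketch of generating an exponential (log) sine sweep digitally. The duration, sample rate, and endpoints are arbitrary choices, not anything from the post.)

```python
import numpy as np

def log_sine_sweep(f_start=20.0, f_end=20000.0, duration=10.0, fs=44100):
    """Exponential sine sweep from f_start to f_end Hz over `duration` seconds."""
    t = np.arange(int(duration * fs)) / fs
    k = np.log(f_end / f_start)
    # Phase whose derivative sweeps the instantaneous frequency
    # exponentially from f_start (t=0) to f_end (t=duration).
    phase = 2 * np.pi * f_start * duration / k * (np.exp(t / duration * k) - 1)
    return np.sin(phase)

sweep = log_sine_sweep(duration=2.0)  # play this out of the monitors
```

Play the sweep out of the monitors while the rented mic records; an exponential sweep spends equal time per octave, which suits how we hear bass problems.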
  9. The biggest problem I run into with subwoofers is mid-low to low range saturation. You can Q it out with plugs (Riggs screams no, because now you're letting ANOTHER EQ mess with your mix). You can knock it out at the actual subwoofer and play with the gains on it. -cut & dry- Amateur suggestion: tune your ears. Get something you mixed (because you know it, and you know how it should sound). Listen on your new monitors. Observe the difference. Adjust all settings for the difference. Pro suggestion: tune the room. Set up a mic in the middle of your room and do an ELC sweep with white noise or sine tones and see how your room "sounds." Then you'll know which freqs are being over-pushed by the sub versus your satellites (monitors). Now check the curves in your owner's manual and adjust at the sub for a more balanced ELC system.
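(As a rough stand-in for the spectral analyser in the "tune the room" step, here's an illustrative sketch that averages a mic recording's energy into a few coarse bands, so you can compare the sub's range against the satellites'. The band edges are made-up choices.)

```python
import numpy as np

def band_levels_db(recording, fs,
                   bands=((20, 80), (80, 250), (250, 2000), (2000, 20000))):
    """Average spectral level (dB, relative) of a recording per frequency band."""
    spectrum = np.abs(np.fft.rfft(recording)) ** 2
    freqs = np.fft.rfftfreq(len(recording), 1 / fs)
    out = {}
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        out[(lo, hi)] = 10 * np.log10(spectrum[mask].mean() + 1e-12)
    return out

# Example: ideal white noise should come out roughly flat across bands.
fs = 44100
rng = np.random.default_rng(1)
print(band_levels_db(rng.standard_normal(fs), fs))
```

If the lowest band reads hot relative to the others with pink/white noise playing, that's the over-pushed sub (or a room hump) the post is talking about.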
  10. Sounds like you're unhappy with what you can crank out with the workflow you've established... that sucks... I hate feeling that way. Most DAWs are 3ft long and about 100 miles deep; easy to learn, long time to master. If you want to build your FX up from the ground, you're probably gonna need to run around with a mic and do some Foley-artist work. If the game is sorta sci-fi, you can probably get away with murder by synthesizing sounds. I would try running your sounds through a sampler (any ADSR-gate equivalent) and messing with the values... it might be really cool what happens when a pencil click is sustained for 4 secs... then pitch that up/down 5 semitones and you might have something cool... or not.
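(The sampler tricks above boil down to two bits of math: an ADSR gain envelope, and the 2^(n/12) playback-rate ratio for an n-semitone pitch shift. An illustrative sketch; the envelope times are made up.)

```python
import math

def semitone_ratio(semitones):
    """Playback-rate ratio for a pitch shift of n semitones (12-TET)."""
    return 2 ** (semitones / 12.0)

def adsr(t, attack=0.01, decay=0.1, sustain=0.6, release=0.3, note_len=4.0):
    """Piecewise-linear ADSR gain at time t (s); the note releases at note_len."""
    if t < 0:
        return 0.0
    if t < attack:                      # ramp 0 -> 1
        return t / attack
    if t < attack + decay:              # fall 1 -> sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_len:                    # hold (the 4-sec sustained pencil click)
        return sustain
    if t < note_len + release:          # fade sustain -> 0
        return sustain * (1.0 - (t - note_len) / release)
    return 0.0

# Pitching that click up 5 semitones means playing it ~1.335x faster.
print(semitone_ratio(5))
```

So "sustain a pencil click for 4 secs, then pitch it up 5 semitones" is just a long sustain stage plus a ~1.335x rate change.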
  11. Interesting... Here I've only been using Soundflower to do audio caps while doing screen caps with my DAW.
  12. Here is a pictorial how-to. This is WAY Logic 101, but I don't know what you know, so I'll start with basics. Step #1: Logic is open and your autoload session is showing the Arrange window. Click and hold the empty button just above the outputs, so you see this. This will load Logic Pro's drum sequencer, Ultrabeat. Step #2: Back in the Arrange window, use your pencil tool (Hot Key: ESC) to draw an empty region on your track. Step #3: Double-click the region to open the MIDI Region window. Using the pencil tool in the MIDI Region window, draw notes. This can be time-consuming, so most artists/engineers use a control surface (I tend to use either my MPC or my Oxygen 8). You've now created (using a sample library) a drum part. The only other things are adding inserts (dynamics-based effects) or sends (time-based effects). If you're looking to get a quick D'n'B sound you can add distortion plugs; I recommend Bitcrusher as an insert. Like I said, this is a 101, but I hope it helps. Question: how do you prefer to do drums; as a sample-library plug-in on an instrument track, or as rendered audio files on an audio track? What is your preferred work method?
  13. That should be awesome then; link it up when you're happy with the product, I'd love to hear it.
  14. Sorry I fell off the grid... getting ready for Comic Con... it's gonna happen. The harmonizer: I was just thinking of something like a DigiTech harmonizer (~$599); it's what a lot of artists are using nowadays to do unique vocal effects. It's pricey, I know, but it is pro-end gear. You don't have to buy it; you can rent it (probably ~$20 a day), do your recording, and have the sound you want with the gear you want. The track: your track is now in the "minor touches" phase, because you've basically got the arrangement down, all the elements are coming together, and the overall sound of the track is designed. EQ the flutes at around 2KHz a bit (a notch EQ with a very narrow Q) to reduce resonance. Cool what you did with the voice; a little loud in the mix... just a little, but good job on the effect, I like it. After the breakdown (1:45 - 2:02): while I'm understanding of the ambient music here, others might not take a liking to it. It might come across as a "loss of energy" in the piece. IMO, cut those 17-ish seconds out and move them to the end after your vocal up-bend for a fade-out ending (even though I like the artistic touch of the bend as the ending). Then fill those empty seconds with the music (guitars and such) that comes in at 2:03; extend it, add to it, whatever works. IMO. Snares: I wanted a larger snare sound... like a marching-band-rolling-snare sound. They just sounded thin to me. I do love the overall musicality of the track. I only had a chance to listen on headphones, so when I get a chance to hear it on monitors I'll PM you with some mixing pointers (if need be).
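(The "notch EQ with a very narrow Q at around 2KHz" can be sketched with the RBJ cookbook notch biquad. This is an illustrative numpy-only sketch; the sample rate and Q value are assumptions on my part.)

```python
import numpy as np

def notch_eq(fs, f0, q):
    """RBJ 'Audio EQ Cookbook' notch-filter coefficients (b, a)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])          # zeros on the unit circle
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

def mag_at(b, a, fs, f):
    """Filter magnitude at frequency f (linear; 1.0 = unity gain)."""
    z = np.exp(1j * 2 * np.pi * f / fs)
    return abs(np.polyval(b, z) / np.polyval(a, z))

# Hypothetical settings: kill the ~2 kHz flute resonance, narrow Q.
b, a = notch_eq(44100, 2000.0, q=25.0)
```

A high Q keeps the cut surgical: the response drops to zero right at 2 kHz but stays essentially flat a few hundred Hz away, so the flutes keep their body.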
  15. Every detail helps! Okay, first off, if you want, I can take a swing at your vocal stems. Some NR, gating, compression, EQing, delay, and then reverb might be just what you need. But if you want to learn and try on your own, digital/analog: for digital NR I use Waves Z-Noise (worth buying if you're into restoration and whatnot); for analog NR I take white noise from the console out of phase and EQ it into the right harmonics to help (and that's IF we have hiss; normally we get all inputs at an SNR of -80dB, so we don't care). The first question is: how are you getting your voice into the computer? What's your procedure? And don't worry, I've worked with artists who've sung into their laptop's internal mic and we still got a Moby-like sound out of them. New probs? Like what? And don't push the harmonies idea too hard at the expense of the song, and remember I meant a harmonizer VST/RTAS plug-in with MIDI. Timbre, like... you know how no one can be the voice of Vader except James Earl Jones... that's because he's got the only "timbre" (TAM-ber) that we'd recognize, that deep, bubbling roar, ya'know? You have a timbre that (already) sounds like a harmonizer plugin; that's why I wanted you to use one, because I wanted to hear what it would've sounded like. I once used one to make my Mac sing in a performance... people thought it was a vocalist. It's getting harder and harder to tell nowadays because of what we can do to the voice. We'll work on the signal flow and NR first, and then we'll talk performance improvement; the first EQ is at the mic.
  16. I liked the distorted guitars near the end of the track. A part of me was torn, because I wanted the dist. guitars to be more reverbed and distant (but still present), and a part of me liked them as they were. I felt the drums could use more breakdown and dynamic feel, but then I was listening for more of that, so I could've had expectations outside the realm of the composition. I hope the fade-out at the very end is more gradual on the sub'd version; it seems a bit harsh, but that could be the stream encoding cutting the quiet out. I really like it and think it'll stand up well.
  17. Idea: Twist & distort the "Kweek!" sound fx from the game.
  18. Bummer dude, they killed your links....
  19. Noise... like the hiss from the mic when you start recording? If you want, you can take an NR plug to your vocal stems (this WILL mess with your high-end harmonics, 5KHz-10KHz+). Tell me what software you're using and I'll see if I can't figure out a better solution for you, if you don't mind electronically "treating" your voice. There are lots of ways to get rid of noise; all the "pros" really do is turn the noise out of phase. I was also talking about a harmonizer plug-in, like the Eventide H910 or something like that, on your voice. Your timbre is just right. Looking forward to the update.
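("Turning the noise out of phase" only cancels perfectly when you have a sample-aligned copy of the exact noise. This toy numpy sketch, with entirely synthetic signals and made-up levels, shows the idealized cancellation and how fragile it is to misalignment.)

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs

voice = np.sin(2 * np.pi * 220 * t)      # stand-in for the vocal
noise = 0.1 * rng.standard_normal(fs)    # steady broadband hiss
noisy = voice + noise

# Add the noise 180 degrees out of phase (i.e. subtract it):
# perfect cancellation -- but ONLY because it's the exact same samples.
cleaned = noisy - noise

# A copy that's off by a single sample barely cancels at all.
misaligned = noisy - np.roll(noise, 1)
```

Real NR plugs can't get a sample-aligned copy of the hiss, which is why they estimate a noise profile spectrally instead, and why they chew on those 5KHz-10KHz+ harmonics in the process.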
  20. It is coming along; notes: your voice is a little overbearing. Bring your voice down -3dB (maybe; if it sounds too far away, cancel) and boost the 3KHz on your voice about +3dB (3KHz is the vox magic frequency; I've seen a lot of mastering engineers use it on the voice to help bring it out in the mix). Also for the voice, maybe use a harmonizer plug-in. A lot of people have mixed feelings about using one on one's voice, but I'm thinking to deliberately give you a " "-like sound; you have a really good-sounding voice for it, like Jan Johnson did/does for trance. I also felt that the accompaniment was a little cheesy. Like listening to MIDI karaoke, ya' know? Keep going. Bring a more structured arrangement; I'd love to help make this a finished product.
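(For reference on the -3dB / +3dB moves above: the dB-to-linear-amplitude conversion is 10^(dB/20). An illustrative helper, not anything specific to the poster's session.)

```python
import math

def db_to_gain(db):
    """dB change -> linear amplitude multiplier."""
    return 10 ** (db / 20.0)

def gain_to_db(gain):
    """Linear amplitude multiplier -> dB change."""
    return 20 * math.log10(gain)

# A -3 dB fader move multiplies the waveform by ~0.708;
# a +3 dB boost multiplies it by ~1.413.
print(db_to_gain(-3.0), db_to_gain(3.0))
```

So the suggested pair of moves roughly trades 30% of the overall vocal level for a ~41% lift right in the 3KHz presence region.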
  21. Wow ***wipes tear from eyelid*** never thought that THAT CC track would mash-up well with Linkin. Awesome.
  22. If you'd like some good examples of good sound texturing check out Chicane, I recommend his track " " good soundscapes and tonal textures. I haven't heard your track so I'm kinda guessing about the critique... Give a link to your track? See if we can't help troubleshoot some more?
  23. Naw! Don't do that; convert it into a "weird" mic for your collection... if you are still getting signal from it, try doing some radical EQing on the board and just see what sounds and freqs you can get out of it. If you can get anything cool, maybe you could use it? When my 1st SM58 busted it lost ALL high end... like LP-filtered at 1KHz. So when I tweaked and pushed, I found that even though I can't use it for clean vox anymore, I can still push the diaphragm really hard and get a cool singing distorted-voice guitar sound. Cool+Cool=2Cool. Cool+Crap=Crap. Crap+Crap=Interesting (possibly).