Everything posted by dannthr

  1. Personally, I only feel like I hear the difference in the richness of the hall/space--which is important, but not something you can't get away with not having. There is a kind of... shimmer or special richness in the 24-bit samples, at least to my ears, but it's a fairly undefinable quality, so I never felt like I missed out on that one. When given the option between 24-bit and 16-bit samples, I tend to go for the 16-bit so I can squeeze maximum performance out of my rig.
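For some perspective on why the difference is subtle: each extra bit of linear PCM buys about 6.02 dB of theoretical dynamic range, so 24-bit mostly lowers the noise floor rather than adding audible "quality." A quick back-of-envelope sketch:

```python
import math

# Theoretical dynamic range of linear PCM: ~6.02 dB per bit.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```

Both figures are far beyond the noise floor of most listening environments, which is why the 24-bit "shimmer" is so hard to pin down.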
  2. So, Gold is good enough for folks looking to start out with something that sounds good out-of-the-box. It's pre-mixed for you, so a lot of the hard work is already done. It's been about 3+ years since I got Gold and I've moved on to what I believe are better and more expensive/expressive samples. But, looking at my templates now, I think that I would be using Gold more OFTEN and it would integrate more easily into my template if I had the CLOSE mic position samples. This was something they were talking about doing for a long time, a close mic'd version of Gold, but it didn't happen until just recently. Before then, you had to get Platinum to get the close mics.
  3. Have you tried importing it? What was the result? I bought the NS_Kit7 in Kontakt format, but when he disappeared off the face of the earth for weeks without mention of when he would return, why he was gone, or when we would get our product, I made a complaint--he returned the next day, told me to F off, and refunded my money basically telling me I would never own his drums. I figured a douche like that, his drums weren't worth it.
  4. Oh wow, when I was investigating Reverberate, Reverberate Core didn't exist. The Dual Convolution is REALLY important to have. On a totally dry signal, you're going to want Early Reflections as well as your IR tail.
  5. If Matlab were the optimal convolution engine, it would be adapted for use in the studio. That's like comparing Mental Ray and Unreal--Matlab was not designed for performance audio applications. By the way, if anyone is looking for a fairly priced convolution reverb that does True Stereo convolution (meaning four audio channels: a stereo IR for each input channel), check out Reverberate for $50: http://www.liquidsonics.com/software_reverberate.htm If I didn't need Altiverb's stage placement algorithm, this is the convolution engine I would be using.
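To make "True Stereo" concrete in signal terms, here's a rough pure-Python sketch -- the direct-form loop and the IR names are mine for illustration, not LiquidSonics' actual implementation:

```python
def convolve(x, h):
    # Direct-form convolution: every output sample is a weighted sum
    # of the input history against the whole impulse response.
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def true_stereo(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr):
    # Four channels of convolution: each input channel gets its own
    # stereo IR pair, and the results are summed into the L/R outputs.
    out_l = [a + b for a, b in zip(convolve(in_l, ir_ll), convolve(in_r, ir_rl))]
    out_r = [a + b for a, b in zip(convolve(in_l, ir_lr), convolve(in_r, ir_rr))]
    return out_l, out_r
```

The point of the four IRs is that a source on the left excites the room differently toward the right than a source on the right does, which a single stereo IR can't capture.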
  6. I think what a lot of people forget about convolution reverb is that it is a sample-based reverb. A convo reverb takes an impulse response--a recorded sample, like a WAV file--and applies it across the sound being effected, sample by sample (so at 44.1 kHz, that means applying that IR across 44,100 samples every second). So you really want to be running stuff like ASIO, have plenty of RAM, plenty of CPU, etc.
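Some back-of-envelope arithmetic on why that eats CPU -- naive time-domain convolution costs one multiply-add per IR sample, per audio sample (real plugins use FFT/partitioned convolution instead, but IR length still drives the cost; the 3-second tail here is a hypothetical example):

```python
# Cost of naive convolution: one multiply-add per IR tap, per output sample.
sample_rate = 44_100                        # 44.1 kHz
ir_seconds = 3.0                            # hypothetical 3-second hall tail
ir_taps = int(sample_rate * ir_seconds)
macs_per_second = ir_taps * sample_rate     # multiply-adds per second of audio
print(ir_taps)            # 132300 taps
print(macs_per_second)    # 5834430000 (~5.8 billion multiply-adds/s, naively)
```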
  7. My farm machine is a Phenom II 3.2Ghz 6-core machine with 16GB of 1333 RAM running Win 7x64. It only runs Cantabile, a VST host, and Kontakt, Altiverb, and any other VST I use. My DAW is on a separate machine. MIDI is sent via MIDIOverLAN and audio is returned via S/PDIF.
  8. My current project uses 5 instances of Altiverb Convolution--three of which use algorithmic stage placement. All of it is applied in real-time as I mix as I write.
  9. If you want TAIKOs, check out 9volt Audio's Stick Breakers Vol 2 Ten Man Taiko, and their TAIKO library, both great and complementary libraries. They're also coming out with a Shime-Daiko library.
  10. Room modes are serious, and if you're mixing, so are early reflection points.
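For reference, the axial modes between two parallel walls fall at f = n·c/2L. A quick sketch, with the speed of sound rounded to 340 m/s for tidy numbers:

```python
# Axial room-mode frequencies between two parallel walls: f = n * c / (2 * L).
def axial_modes(length_m, count=3, c=340.0):
    # c: speed of sound in m/s (rounded); length_m: wall-to-wall distance.
    return [n * c / (2 * length_m) for n in range(1, count + 1)]

print(axial_modes(5.0))  # [34.0, 68.0, 102.0] Hz for a 5 m dimension
```

Those low-frequency peaks and nulls shift with listening position, which is why an untreated room can lie to you about your bass balance.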
  11. Old to new... dead WIPs.
     Wrath of Thanatos: Death - Secret of Mana remix of Thanatos' reveal and subsequent battle.
     Dawn of Chaos: Main Title - Warcraft Orcs and Humans remix of Human 3 and Main Titles.
     Prisoners of Destiny - Chrono Cross Prisoners of Fate remix.
     ...sigh... I am always slammed in the middle of something like this...
  12. If you want to talk about the history of epic action music, you have to go to Hans Zimmer who in all fairness really first defined the genre with Crimson Tide. Other examples of Hans Zimmer and his musical spawn would include Harry Gregson-Williams, Steve Jablonsky, Klaus Badelt, etc--mostly folks who've actually worked for Hansy. Some slightly off the mainstream inclusions would be Bear McCreary (Dark Void, Battlestar Galactica), Tan Dun (Hero), and John Debney (Lair, Cutthroat Island, Passion of the Christ). Some over-the-top epic epicness inclusions would be Thomas J Bergersen of Two Steps from Hell--just about anything he does is remarkable.
  13. First, start with normalizing. Normalize to the peak and set that peak to 0 dBFS. Listen to that. Then you might try normalizing to an RMS average. Play around with different settings. A good normalizer will be able to scan and tell you the average and peak levels of any file, so open up your commercial reference, scan it, catch some of the numbers, and start matching to it. Also remember that a commercial release isn't always a good reference--I understand what you're trying to do, but it's a fine line. Even top commercial releases can clip--check out the Pirates of the Caribbean I OST. Clipping all over the place. You won't notice it on radio play, but you will on decent cans or speakers.

     If you feel like you can't get it "loud" enough overall, go back (normalization is destructive, so make sure you save before trying stuff out) and try some light compression first, or a limiter/maximizer. Compression just narrows the dynamic range of your track. The danger is that you might have a bunch of instruments playing inside the same frequency range getting compressed together and combining to clip. So you might want to look into that if your compression isn't working. That's mixing and EQ, which you generally want earlier in your chain. If some instruments are combining in a certain frequency range, that might be making your track muddy and clippy in that range. Try carving out EQ space for the individual instruments, then compress, then normalize, etc.
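To make the peak-vs-RMS distinction concrete, here's a minimal pure-Python sketch (my own illustration, not any particular editor's algorithm):

```python
import math

def peak(samples):
    # Peak: the single loudest sample (what clipping cares about).
    return max(abs(s) for s in samples)

def rms(samples):
    # RMS: tracks the perceived "average" loudness over the file.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_peak(samples, target_dbfs=0.0):
    # Scale the whole file so its loudest sample lands at target_dbfs.
    gain = (10 ** (target_dbfs / 20)) / peak(samples)
    return [s * gain for s in samples]

sig = [0.1, -0.5, 0.25]
out = normalize_peak(sig)   # peak is now 1.0, i.e. 0 dBFS
```

Peak normalization changes nothing about the dynamics, only the overall gain -- which is exactly why it can't make a track "loud" on its own and you end up reaching for compression.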
  14. Well, at least they understand their libraries depreciate. Sonic Implants is ridiculously overpriced for how old and outdated it is.
  15. The thing you have to remember is that EWQLSO is like 5 years old.
  16. I prefer to just edit manually in tempo view--though I also do a lot of mouse work for my instrumental performances as I usually get my performance tweaks down as I write.
  17. That's what I do, but usually for the whole orchestra. Very important also on guitar strums, harp runs, etc. The highest level of achievement for a virtual performance is imperfection, while the highest level of achievement for a human performance is perfection--irony. PS: There is NO "too nit picky" when it comes to virtual instrument programming.
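One simple way to inject that imperfection programmatically is to jitter note timing and velocity -- a sketch under an assumed event format (the tuples and parameter names are hypothetical, not any DAW's API):

```python
import random

def humanize(notes, time_jitter=0.012, vel_jitter=6, seed=7):
    # notes: list of (start_seconds, velocity 1-127) tuples (hypothetical format).
    # Nudges each note slightly off the grid so the part isn't machine-perfect.
    rng = random.Random(seed)  # seeded so a render is reproducible
    out = []
    for start, vel in notes:
        new_start = max(0.0, start + rng.uniform(-time_jitter, time_jitter))
        new_vel = min(127, max(1, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((new_start, new_vel))
    return out
```

Uniform randomness is only a starting point -- shaping the jitter musically (tighter on downbeats, looser on ornaments and strums) gets closer to the "little players" idea than noise alone.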
  18. In SONAR, I can draw curves and dips in the tempo map, which makes accelerating and slowing easy, as well as rubato-style tempo where the tempo almost becomes part of the performance expression.
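Under the hood, a tempo map is just a beat-to-time mapping. A toy piecewise-constant version (the map format is my own, not SONAR's -- a drawn curve is effectively many tiny segments like these):

```python
def beat_to_seconds(beat, tempo_map):
    # tempo_map: [(beat_position, bpm), ...] sorted by beat (hypothetical format).
    # Each segment contributes (beats in segment) * 60 / bpm seconds.
    seconds = 0.0
    for (b0, bpm), (b1, _) in zip(tempo_map, tempo_map[1:] + [(beat, None)]):
        if b0 >= beat:
            break
        seconds += (min(b1, beat) - b0) * 60.0 / bpm
    return seconds

print(beat_to_seconds(8, [(0, 120), (4, 60)]))  # 4 beats @120 + 4 @60 = 6.0 s
```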
  19. The surround mics have a ridiculously long release tail--do each mic individually and mix them in post. Use the closest mic for writing.
  20. Vibrato tends to appear mostly in the higher velocities; additionally, a boys' choir has no vibrato, and mmm and oooh samples can blend nicely together with no vibrato.
  21. Writing for an instrument you can't personally play is a process of abstraction--you have to pull yourself back and objectively examine what your music is like on that instrument. Everything else is about psychology.

     Since you're thirsty for an example, I'll give you one, but it's not about playing piano--it's about another piece I was working on that featured a solo flute in an orchestra. I didn't have a flute, I didn't have an orchestra, just my computer and my trusty multi-sampled virtual instruments. Up until the moment the flute came into the piece, the strings were just playing a sustain, and they were the only instruments playing at the time. When the flute came in, the basses and celli were supposed to come in with a low pizzicato note. What I imagined was happening was that the conductor was holding a fermata, awaiting the flautist's entrance. In my mind, there was a breath of anticipation that slowed the flute entrance--the player psyching themselves up for the solo--while the conductor watched the flautist in order to cue the rest of the instruments, who were watching him. When the flute player came in, the conductor was just behind the flute player, making the bass and celli pizzicati fall behind the beat. During the opening part of the solo, the conductor mostly followed the lead and expression of the flautist, thereby making the rest of the instruments late, but as more and more instruments joined in with the accompaniment, the orchestra became more cohesive. The flute started blending into the ensemble, and eventually everyone was following the conductor together by the end of the section.

     Humanization is about treating your virtual instruments like little players. It is treating them like humans.
  22. Maybe it's just me, but it doesn't SOUND like a pianist wrote this piece. It sounds like it would be awkward to play. The process of humanization is often approached (ironic as it may seem) mechanically or automatically. Humanization means thinking about the imagined human element. How would a human play this piece? Their imperfection reveals their humanity, yes, but there's more to it. How do they feel about the piece? What do they want to change about it? If it's a different person, maybe their goals with the piece aren't the same as yours. You really need to put yourself in the shoes of an imaginary person to do this to full effect.

     There is a flow to the fingers when playing piano. When pianists write piano music, this understanding of finger flow is built in, and it's more natural. Imagine the hands: what are they doing, how is this played by the left hand, how by the right? Think about the movement of the arms, the feet, the lean of the player as they move right or left. That's humanization.
  23. Check out vgmusic.com and see if you can arrange some of their MIDI files.
  24. Got clearance to show off some of the soundtrack for Dawntide from WAISoft:
     01 - Dawntide Main Theme and Menu
     02 - The Tilled Earth (feat. Jeff Ball on Violin)
     03 - Down in the Square
     04 - The Howling Wind
     05 - Through the Twisted Branches
     06 - Otherworld
     07 - The Breeze in Our Sails (feat. Hannes Frischat on Cello)
     08 - A Flower in the Sand
     09 - Only Seen by Torchlight
     10 - To Trek in the Footsteps of Giants
     11 - Of Fire and Stone
     12 - As the Sea Runs Red (Epic Battle)