Everything posted by Moseph
-
My ER calculations are based on the dimensions of an orchestra shell and the speed of sound (~1,100 feet/sec). I think the dimensions I was assuming for the orchestra shell were something like 40 feet wide x 30 feet deep. You can do some Googling for concert hall dimensions and read specs for various halls if you're really interested in that aspect of it. The important thing to remember is that the closer an instrument is to the listener, the longer it will take for the ER vs. the direct signal to reach the listener, and the more muffled the ER will be (more on that in a moment).

As far as predelay for back-of-the-shell ERs goes, my values are 57ms for strings, 42ms for woodwinds, 28ms for brass, and 12ms for percussion. For side-of-the-shell ERs, I did some fudging and concluded that a 9ms predelay worked for all sections. That's less realistic/precise than the back-of-the-shell calculations (shells are typically wider in the front than in the back, for example, and the sections' distances to the sides won't technically all be the same), but it seems to work okay.

Reverberate is a convolution reverb that gives some special controls for ERs and includes separate impulse responses for ERs -- so you can have just the ER without the tail. If you happen to have Reverberate and are interested in my specific settings, I can link you to my presets. Basically, I have the size set as close as I can to the shell size with the predelay values listed above, and then I adjusted other settings based mostly on what seemed to sound good.

The ERs from the sections closest to the listener travel back through the orchestra, hit the back wall, then travel forward through the orchestra to the listener. The ERs from the sections farthest from the listener immediately hit the back wall, then travel through the orchestra only once to reach the listener. Because of this, the ERs from the closest sections should have more of the high frequencies rolled off than those from the farthest sections. Conversely, the direct signal from a closer section doesn't travel through the orchestra at all, whereas a farther section's direct signal does. Because of this, the direct signal from a far section will have more of the high frequencies rolled off than the direct signal from a close section. In short, the high frequency content of a section's direct signal vs. its (back-of-the-shell) ER should be inversely proportional: a close section's direct signal is less rolled off than other sections' direct signals, but its ER is more rolled off than other sections' ERs; a far section's direct signal is more rolled off than other sections' direct signals, but its ER is less rolled off than other sections' ERs.

Here's an example track with the reverb discussed in my previous post. It's all VSL SE Plus, except for percussion, which is a grab bag of stuff. The reverb is arguably a little wet -- a matter of taste, I think -- but if I were to change that, it would be by lowering bus levels and not by deleting reverbs. I may also be fudging percussion positioning some -- in a live orchestra, percussion is placed behind the orchestra, but in film score stuff, percussion is usually much more forward-positioned, and I don't remember specifically what I did in this track.
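If you want to sanity-check those predelays, here's a minimal sketch of the arithmetic in Python. The per-section distances to the back wall are illustrative guesses of mine (nothing measured from a real shell); the only thing taken from the post above is that the back-wall ER's extra path is roughly twice the instrument-to-back-wall distance, traveled at ~1,100 feet/sec.

```python
# Rough back-of-the-shell ER predelay: the reflection's extra path (instrument -> back
# wall -> listener, minus the direct path) is about 2 x (instrument-to-back-wall distance).
SPEED_OF_SOUND_FT_PER_MS = 1.1  # ~1,100 feet/sec

# Illustrative section-to-back-wall distances in feet (shell on the order of 30 feet deep).
distance_to_back_wall_ft = {
    "strings": 31,     # front edge of the stage, roughly a full shell depth from the back wall
    "woodwinds": 23,
    "brass": 15,
    "percussion": 7,   # nearly against the back wall
}

for section, feet in distance_to_back_wall_ft.items():
    predelay_ms = 2 * feet / SPEED_OF_SOUND_FT_PER_MS
    print(f"{section}: ~{predelay_ms:.0f} ms")

# Prints roughly 56, 42, 27, and 13 ms -- close to the 57/42/28/12 ms values above.
```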
-
Turns out I have seven reverbs, not six. I'd forgotten about number 5 below, because I don't think I always use that one.

Reverb 1: The first effect on each track is a kind of odd thing where I expand the signal, then add a little bit of reverb with a short tail on it, then compress the signal. I find this adds some body to the sound and helps glue the sample transitions together. This uses Breverb 2.

Reverb 2: Each individual track is routed to a submaster for its orchestra section (woodwinds, brass, strings, percussion), and each of these sections is bused to an early reflection simulation (Reverberate). This models the early reflections coming from the back of the hall. The size is roughly calculated from the size of an orchestra shell like you would find in a concert hall. Each section needs different settings, so there are a total of four buses for this -- one for each section.

Reverb 3: Each section is also bused to a second early reflection sim that models reflections from the sides of the orchestra shell. Again, I use Reverberate here. All sections go to the same bus for this.

Reverb 4: Each section, plus its rear ER, plus its side ER, is bused to a reverb tail. I'm using Reflektor here. I have about a 70ms predelay on this, because I don't want Reflektor to interfere with my ERs, which happen in those 70ms.

Reverb 5: The Reflektor output gets run through a Breverb 2 instance that adds some length and density to the tail.

Reverb 6: All orchestra sections, ERs, and the reverb tail get routed to a main orchestra submaster. Here, I have an instance of Breverb 2 (Orchestral Beef-up preset) doing some very subtle, well, beefing-up. It adds a bit of density.

Reverb 7: The orchestra submaster is bused to a bus that I call Warmth. It contains a plugin chain that resembles the one from reverb 1 above. Among other things, there is again a Breverb 2 instance between an expander and a compressor that fuddles the sound slightly.

So generally speaking, I use Breverb 2 for nuance, Reverberate for ERs, and Reflektor or some other convolution unit for tails.
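Since that's a lot of routing to hold in your head, here's a minimal sketch of how the sends fit together, written as a plain list of (source, destination) pairs in Python just so the signal flow is easy to scan. The bus names are shorthand rather than the actual names in my project, and this only captures the send structure as I described it above, not levels or any plugin settings.

```python
# Shorthand map of the sends described above. One line per send; in the real project,
# the rear ER (reverb 2) runs as four separate buses, one per orchestra section.
sends = [
    ("track + reverb 1 (Breverb 2 expand/verb/compress)", "section submaster"),
    ("section submaster", "rear ER bus -- reverb 2 (Reverberate)"),
    ("section submaster", "side ER bus -- reverb 3 (Reverberate)"),
    ("section submaster", "tail bus -- reverb 4 (Reflektor, ~70ms predelay)"),
    ("rear ER bus -- reverb 2 (Reverberate)", "tail bus -- reverb 4 (Reflektor, ~70ms predelay)"),
    ("side ER bus -- reverb 3 (Reverberate)", "tail bus -- reverb 4 (Reflektor, ~70ms predelay)"),
    ("tail bus -- reverb 4 (Reflektor, ~70ms predelay)", "reverb 5 (Breverb 2, lengthens/densifies tail)"),
    ("section submaster", "orchestra submaster -- reverb 6 (Breverb 2 Orchestral Beef-up)"),
    ("rear ER bus -- reverb 2 (Reverberate)", "orchestra submaster -- reverb 6 (Breverb 2 Orchestral Beef-up)"),
    ("side ER bus -- reverb 3 (Reverberate)", "orchestra submaster -- reverb 6 (Breverb 2 Orchestral Beef-up)"),
    ("reverb 5 (Breverb 2, lengthens/densifies tail)", "orchestra submaster -- reverb 6 (Breverb 2 Orchestral Beef-up)"),
    ("orchestra submaster -- reverb 6 (Breverb 2 Orchestral Beef-up)", "Warmth bus -- reverb 7 (Breverb 2 chain)"),
]

for source, destination in sends:
    print(f"{source}  ->  {destination}")
```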
-
My current production chain (which I hope to simplify, since I'm in the process of changing most of my core orchestra samples to a wet library rather than my current dry one) runs each track through something like six reverbs. It's so complicated because I'm doing some subtle sound processing stuff with some of the verbs and also modeling early reflections from both the back and the sides of the hall separately from the tail. I'll post a breakdown of the reverb chain when I get the time.

But yeah, it's definitely not a matter of just waking up one day and going, "USE ALL THE REVERBS!" You keep adding stuff to the chain because you have a particular sound in mind that you haven't been able to get with fewer reverbs. In my case, I've just been generally unsatisfied with how VSL samples sound even with really expensive reverbs (including even the demos I've heard of VSL's MIR, which is designed specifically to work well with VSL samples). VSL samples always sound thin, and clean, and like they're floating on top of a separate layer of reverb that isn't connected to the dry sound. So my ridiculous reverb chain is my attempt to address that, and I've been pretty happy with the results.

(On a slightly unrelated note, it looks like the template I'm putting together with the EWQL Hollywood series, though it may not need six reverbs, is going to have 400+ MIDI tracks and a couple hundred audio tracks. My projects are doomed to be absurd no matter what I do.)
-
You'd need to read its license agreement to be absolutely certain, but I've never heard of a commercially-available VST/sample library that prohibits commercial use (except for educational or not-for-resale versions). There are occasionally minor restrictions on exactly how they can be used commercially (e.g. must be used in a musical work in combination with two or more other instruments), but not blanket prohibitions.
-
Like normal earplugs, they still make you sound weird to yourself when you talk, but other than that, the effect is extremely natural. I generally wear them in movie theaters. This is the brand that I have: http://www.amazon.com/Hearos-Earplugs-Fidelity-Series-1-Pair/dp/B000V9PKZA/ref=sr_1_4?ie=UTF8&qid=1388524072&sr=8-4
-
I use something similar to these, and they're very good. The attenuation is pretty much the same across the entire frequency spectrum, so they don't distort sounds -- they just reduce loudness. The ones in the link don't list a noise reduction rating, but if I recall the specs on mine correctly, you're probably looking at ~20 dB compared to maybe ~30 dB for normal foam earplugs.
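For a rough sense of scale (this is just the standard decibel-to-linear conversion applied to those approximate figures, not anything from the manufacturers' specs), attenuation in dB translates to a sound-pressure ratio of

\[
\text{ratio} = 10^{\mathrm{dB}/20}, \qquad 20\,\mathrm{dB} \rightarrow 10^{1} = 10\times, \qquad 30\,\mathrm{dB} \rightarrow 10^{1.5} \approx 31.6\times
\]

so foam plugs cut the sound pressure roughly three times further than the flat-response ones do.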
-
NB: I don't use Logic, so what I'm telling you is based on Googling and not on my actual experience.

If I understand how you're routing things, what you're attempting to do can be done in a more straightforward way that may solve the problem. External MIDI tracks and the instrument instance you're routing them to are intended to feed MIDI data to a hardware unit, such as a physical MIDI keyboard with onboard sounds. You're sending the data to a software unit, PLAY, and the normal way to route multiple MIDI tracks to a software unit is to use software instrument tracks rather than external MIDI tracks.

I think you should be able to ctrl-click/right-click each of your external MIDI track names and choose Reassign Track > Mixer > Software Instrument > (whatever PLAY is called). (See the user's manual section http://documentation.apple.com/en/logicpro/usermanual/index.html#chapter=9%26section=10 under the heading "To reassign a track to a specific channel strip.") Make sure the tracks' channels are set correctly and match what PLAY expects, and I think things should work.

If that doesn't work, you could try creating new software instrument tracks instead. In the New Tracks dialog box, select Software Instrument, check the multi-timbral option, and tell it how many tracks you want; it will create that number of tracks on ascending MIDI channels, all routed to the same software instrument. (See the manual section http://documentation.apple.com/en/logicpro/usermanual/index.html#chapter=9%26section=4 under the heading "Software Instrument Tracks.")

Anyhow, the basic thrust of this is that multiple software instrument tracks can all be routed directly to a single instance of PLAY. If the above info doesn't work, Google around for more detailed info on how to route software instrument tracks. You should not have to use external MIDI tracks at all.
-
It sounds like something's MIDI input may be incorrectly set to all or omni, which would make it accept MIDI input from all devices/channels rather than just a single channel. I'm not familiar with how Logic's MIDI instances work (which seems to be what is referenced in the SOS article), but my first guess is that PLAY is not set up properly. In the Browser view of PLAY, make sure that each loaded instrument (in the upper left pane) is set to a different individual MIDI channel and not to omni. Then, for each loaded instrument, go into the Player view and make sure that Channel (in the upper left below MIDI Port) is set to the same channel you used for the instrument in the Browser view.
-
The Music Software Deals thread
Moseph replied to big giant circles's topic in Music Composition & Production
How are you liking Action Strings? I have Komplete 8 Ultimate, and Action Strings is the primary reason I'm interested in upgrading to 9. Rapid repetitions are so hard to do well with normal string libraries. -
The Music Software Deals thread
Moseph replied to big giant circles's topic in Music Composition & Production
I guess it's worth mentioning that Native Instruments is still running their 50% off almost everything sale (excluding Komplete/Ultimate, where 50% off only applies to the upgrades). Lasts through Dec. 9th. -
I, for one, would like to see a return to Monkey Island's Dial-A-Pirate copy-protection. EastWest, are you listening? I expect that the price of iLoks is a repercussion of their foothold in the professional market. $50 is much less of a big deal when you have thousands of dollars worth of iLok'd software rather than just $100. It's the hobbyist who gets the most screwed here, unfortunately.
-
I'm mildly in favor of hardware keys at this point, but I understand where HoboKa's coming from since that was the stance I took six or seven years ago. Obviously, I love software that doesn't have to be authenticated at all (like most Kontakt libraries). At this point, though, I'd sooner have things registered to an iLok than authenticated some other way -- at least then I know at a glance how the license works: install wherever I want, and just move the key around.

With other forms of authentication, I have to deal with re-authenticating when I install on a new computer, and deleting things off of old computers so as not to be in violation of install-on-only-two-systems clauses, and remembering login details for online accounts tied to the software, and searching my email archives for serial numbers, and so forth. It's just potentially a huge pain in the ass to have to deal with this for all of my software when moving to a new system, and the iLok setup circumvents that particular inconvenience entirely. Six years ago when I had very little audio software to worry about reauthorizing, that point would never have occurred to me, but now it strikes me as a pretty good argument in favor of hardware keys.

Recognizing this has shifted my view from seeing hardware keys as an inconvenience to seeing licenses in general as an inconvenience to which hardware keys are one of several approaches. You still have to deal with a license in a non-iLok situation. Without an iLok, you have to notify the software vendor (that is, re-authenticate) when you replace or substantially modify your computer system, or even uninstall and reinstall. With an iLok, you negotiate the license once and are given a physical object that represents the license, and nobody cares what you do to your computer. Why get bent out of shape about the hardware key? Rather, why not be annoyed that without a hardware key you have to get permission from the vendor to reinstall software that you've been legitimately using for years?

To look at it another way, if some company offered to consolidate and manage all of my software license agreements so I wouldn't have to screw around with them beyond the first authentication, I'd think a one-time payment of $50 would be a pretty good deal. And that's basically what iLok is doing.
-
Actually, now that I think about it, I did recently pass up the opportunity to buy a plugin on sale that required an iLok2 because I have an iLok1 and didn't want to buy a new one just for a $50 plugin. The thing that's changed about my approach to hardware keys is that before, I would have been like, "stop trying to inconvenience me/screw me over, EastWest," and now I'm like, "I like you, EastWest, and if you feel a key is necessary for your business model then I'm willing to give you the benefit of the doubt because I want you to keep producing libraries."
-
AFAIK, iLok2 hasn't been cracked yet. Though there are only a handful of companies that require the second gen model; most let you use first gen as well. NI seems pretty committed to not locking down Kontakt, so they're a good company to support if you oppose hardware keys. I used to be super-opposed to hardware keys as well, which is the main reason I originally bought SONAR rather than Cubase years and years ago. But then I needed an orchestral library, and the only real options at that point were Vienna and EWQL, both of which require hardware keys. So I said "fine, then" and got Vienna and never had any problem with the key.
-
If you're into tinkering and detailed audio editing, you'll have a blast with the library. When you use the wordbuilder, you get a bunch of tiny soundclips that you do mini multitrack mixes of to get words. To get good results, you have to both be really patient/meticulous and have a very clear understanding of what a real choir sounds like -- experience singing in a choir helps a lot. I find I have to do a lot of manual blending of vowels to get decent diction. You have to keep in mind that in addition to being generally difficult for listeners to understand clearly, real choirs don't pronounce words the same way we do when we speak normally. A lot of the problems I've heard in other people's use of Symphonic Choirs come from them having the choir pronounce things in ways that would drive a choral director mad.

A lot of the complaints about Symphonic Choirs in the past focused on the fact that it used a MIDI plugin for the wordbuilder, which meant that people using DAWs without MIDI plugin support had to route the wordbuilder into their DAWs with a MIDI yoke/loopback setup. This has been changed (within the past couple years, I think), and the wordbuilder now runs in PLAY as part of the Symphonic Choirs instance, which makes setup much more straightforward.

And last time I checked, the demos on the product page were terrible. The library is capable of much better than that.
-
What to do when offered a music related job.
Moseph replied to Esperado's topic in Music Composition & Production
Make sure both you and the person you're negotiating with are clear on who will own the rights to the music. If possible, and especially if they're not paying you a lot, you want to retain complete ownership of the music so you can use it for other things/sell it yourself/whatever. -
Can Drivers affect an Exported MP3?
Moseph replied to SonicSynthesis's topic in Music Composition & Production
Those audio enhancements are hit-or-miss even with professionally-produced stuff. The best way to evaluate the quality of your mix is to compare it to a similar reference track that you know is mixed well. -
The Music Software Deals thread
Moseph replied to big giant circles's topic in Music Composition & Production
But at least soundware doesn't clutter up your house. I just discovered that SONAR's REX player has the ability to send its MIDI-mapped REX slice sequence to the piano roll as a sequence of MIDI notes for easy slice rearranging. I used to have so much fun doing that with REX loops in Reason and had no idea SONAR could do it. So now I, too, am getting the collection. The draw of a bunch of REX files to screw around with plus some really useful-looking sample libraries (tempo-sync'd cymbal swells, anyone?) is too much to resist. -
The Music Software Deals thread
Moseph replied to big giant circles's topic in Music Composition & Production
It's samples. -
Could be. Bands sometimes have strange ways of dividing writer credits on their music for PRO registration purposes that don't actually reflect the real authorship. (Like I know it's pretty common that a band will just mutually agree to credit all the band members on all songs regardless of whether they helped write them or not.) My guess would be that this isn't the case here, since Buxer is actually credited on a couple of the Jetzons songs, which suggests that someone was keeping track of what he was and wasn't involved in. Though if he is actually that hands-off about requesting writer credit, it could be that he just didn't monitor what was going on with the BMI registrations because -- especially for unreleased tracks -- he just didn't care that much.
-
Finale 2014 is out! And it's the first version since I started using the program in 2006 that actually makes me really want to upgrade. Backwards- and forwards-compatible saves! Intelligent handling of cross-layer accidentals and rests! I'm hardly even annoyed that you still have to custom-define beamed slashed grace notes. Also, the competitive upgrade from Sibelius is now $140, which is the same as an upgrade from a previous Finale version.
-
The ReMix review threads are, in fact, tagged with some musical info -- here's the tag cloud. That's the nearest thing OCR has to genre classification at present.
-
I write most things in Finale before ever opening my DAW. I'd do it by hand on staff paper, but it ends up being extremely time-consuming to get an easily-readable orchestral score that way (no copy/paste or part extraction), so when I work on paper, it's usually no more than melodic/harmonic sketches. The two main issues with working in a DAW for me are that I prefer to look at notation rather than piano roll data (and am already extremely comfortable in Finale, so using the DAW's score editor feels limiting) and when I try to write in a DAW, I usually get bogged down in tweaking the sound and messing with keyswitches to the point that the quality of my writing suffers. I find it works best for me to divide the tasks of composing, recording, and mixing with as little overlap as possible.