
Posts posted by Moseph

  1. The amount of micromanaging involved in getting detailed CC automation reflects the fact that, unfortunately, most of the MIDI controllers available to people are terrible for recording nuanced CC performances. CC control is potentially more expressive than velocity control, but it needs to respond in a note-sensitive way, as velocity does, in order to realize that potential. The big issue is that most controllers can't tie their CC data streams to their note data in any meaningful way, because note control and CC control are conceptualized as completely separate processes that don't interact with each other. There are solutions to this problem, such as aftertouch and/or custom scripting (I use the latter), but if you're stuck with the usual keyboard + modwheel setup, getting detailed CC recordings that respond meaningfully to note boundaries is virtually impossible without a lot of after-the-fact editing.

  2. Pd is amazing for doing MIDI controller hacks.

    I have a patch that generates MIDI output by interpreting pen gestures on a graphics tablet as violin bowing. It automatically steps through notes from a predefined MIDI file (which I generate from my Finale score), so I don't have to bother with pitch input at all when recording and can focus on rhythm and expression. It's the best MIDI controller I've ever used, and because I programmed it, it can evolve as my needs evolve. I'm in the process of expanding it so that shaking a Wii controller will generate frequency and depth data on which to base vibrato.

    I also use Pd to do some user interface stuff for an overhaul of SONAR's MIDI editing interface that I've scripted with SONAR's CAL script. Basically, CAL outputs data about SONAR to Pd via MIDI, and I can work with that data in Pd and then send the results back to SONAR by writing a file to disk and then reading that file with CAL.

  3. I need a DAW with robust scripting capabilities (beyond just basic macros), because that's the only way I can set up a MIDI editing workflow that I like. Sonar works for me in that regard with CAL script, but only just barely -- CAL is slow, buggy, and outdated. I'm looking into the possibility of moving over to Reaper because its scripting looks better.

  4. Also worth noting: Music Creator 6 uses the same project file format as SONAR, so if you upgrade to SONAR later your projects will still be compatible.

    EDIT:

    Regarding the easy-peasy-make-music-just-like-the-pros-do language used in Music Creator 6's marketing blurbs: Music Creator 6 is an attempt to capture the casual market segment that doesn't really know anything about DAWs. Cakewalk's in a kind of weird place in the DAW scene where they're using what is at its core the same program to compete in both the high-end Cubase/Logic/Pro Tools market and the bargain-basement entry-level market. Which is nothing new, I guess, since there have always been multiple versions of a lot of DAWs, but I've never seen such a huge difference in the focus of the ad copy at the various levels as with Cakewalk's stuff.

  5. Music Creator 6 is a slimmed-down version of SONAR X3. I haven't used Music Creator 6, but I use SONAR X3 and like it. It looks like the main differences between Music Creator 6 and SONAR X3 are that the former doesn't come with as many instruments/effects, has a track limit (32 audio, 128 MIDI, 8 buses), is 32-bit only rather than both 32- and 64-bit, and doesn't support VST3 (which probably isn't a problem for you). More complete comparison here: http://steamcommunity.com/sharedfiles/filedetails/?id=205802602

    It's probably worth $12.49; just be aware that the included virtual instruments and sounds are pretty limited, so you're likely going to have to buy your own VSTs and/or look for free stuff to download to expand your sonic palette.

    Definitely don't buy it after it goes back up to full price ($50), though, because the entry-level version of SONAR X3 is only $60 on Steam.

  6. What specifically did you do when you hooked up all of the connections? It sounds to me like something or perhaps multiple things are not being powered properly. Getting a request for boot media sounds like the OS drive is not getting power; not getting a BIOS at all sounds like the motherboard is not getting power. Also, are you positive the BIOS is not running? If the video card were not getting power, you would get a black screen even if the BIOS were running.

  7. The overall mix is very mid-range heavy, which I think is because, as Vinnie mentioned, there's just not a lot of bass going on. Bass guitar and kick drum are, I think, the biggest problem areas.

    For the kick, you might want to start by just raising the level. If the kick track clips and it's still not loud enough, put a compressor and/or limiter on the track so you can boost the level even more. Once it's loud enough to hear clearly, you may have to do some EQing to get it to sit well in the mix. Typically, you'll want to emphasize ~100 Hz for oomph and ~3000 Hz if you want beater noise. I'd recommend googling "mixing kick drum" and doing a lot of reading.

    For the bass guitar, it just sounds like you need more low end. I'd start by bringing the level up (maybe +6 dB) and experiment with EQing down some of the midrange, maybe 400-900 Hz.

    EDIT: Also, what model of headphones are you using? It's possible that they're coloring the sound in a way that obscures the problems in the mix.
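
    To make the EQ suggestions above concrete, here's a minimal Python sketch of a peaking (bell) filter using the standard RBJ audio-EQ-cookbook formulas. The specific center frequency, gain, and Q values are just illustrations, not prescriptions for this mix:

```python
import cmath
import math

def peaking_eq_coeffs(f0, gain_db, q, fs):
    """Biquad coefficients for a peaking EQ (RBJ audio-EQ-cookbook)."""
    a_lin = 10 ** (gain_db / 40)  # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def gain_at(f, b, a, fs):
    """Magnitude response of the biquad at frequency f."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return abs(num / den)

# A +6 dB bell at 100 Hz (kick "oomph" region), Q = 1, 44.1 kHz
b, a = peaking_eq_coeffs(100, 6.0, 1.0, 44100)
```

    At the center frequency the boost is exactly 10^(6/20), about 2x; far away the response returns to unity, which is what makes a bell filter suitable for targeted boosts like these.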

  8. When I import it to Finale, I get three tracks plus the percussion track (which is on track 10). When I import it to Sonar, I get four tracks plus the percussion track (still on track 10). The extra track, which is on channel 4 and thus is imported as track 4, consists of a bunch of notes on the same note as the percussion track with a velocity of 64 and a duration of 0, in a rhythm that resembles the percussion track but is not identical.

    If I re-export the MIDI from Sonar, the mysterious track 4 then shows up when I import the new file to Finale.

    I'm guessing it's a problem with whatever program did the conversion to MIDI. Anyway, I've uploaded both type 0 and type 1 exports of the file from Sonar -- see if one of these reads properly for you.

    Sonar export - type 0

    Sonar export - type 1

  9. Yes, I found it now! Unfortunately...

    I have tried importing a different MIDI file, and it shows that I can import 'All Tracks' or 'track 01', 'track 02', etc., just as you suggested. But with this elusive MIDI file I'm trying to work with, FL Studio only shows 'All Tracks' or 'Track 0' (both give the same result). I think the rather weird behavior of these files is due to the fact that they are .vgm files converted into .mid. Still, I wish I understood why some programs don't recognize it as they would a normal MIDI file.

    It sounds like the MIDI file may be type 0, where everything is saved into a single track within the file rather than split into multiple tracks. Why this would affect importing the drums, though, I'm not sure. The file distinguishes internally between tracks by putting things on different channels within the same track, so make certain that when you're importing, you're importing all channels (especially channel 10, which is where the drums probably are).
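
    If you want to check which type a file is without relying on a DAW's import behavior, the format word lives in the file's first 14 bytes. A minimal Python sketch (reading from disk omitted; this just parses the header chunk):

```python
import struct

def smf_format(header: bytes) -> int:
    """Return the Standard MIDI File format (0, 1, or 2) from the header chunk."""
    if header[:4] != b"MThd":
        raise ValueError("not a Standard MIDI File")
    # 'MThd' is followed by a 32-bit chunk length (always 6), then three
    # big-endian 16-bit words: format, number of tracks, time division.
    _, fmt, ntrks, division = struct.unpack(">IHHH", header[4:14])
    return fmt

# Minimal demo headers: a type-0 file with 1 track, a type-1 with 4 tracks
type0 = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
type1 = b"MThd" + struct.pack(">IHHH", 6, 1, 4, 480)
```

    A type-0 file crams every channel into that single track, which is consistent with an importer offering only 'Track 0'.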

  10. I've not seen much of this sort of thing available, probably because composers tend to be so protective of their work. I've been thinking for a while now about doing an orchestral ReMix with EWQLSO and writing up a walkthrough on the creation process that includes downloadable project files, but that likely won't happen in the immediate future. Unfortunately, my use of EWQLSO thus far has been limited to mixing bits of its percussion with other libraries. I don't have any existing project files that make substantial use of the library; otherwise, I'd offer them to you.

  11. No, you don't need an additional MIDI cable. The only use for an additional MIDI cable would be if you need to send MIDI data from the computer back to the keyboard (to trigger the keyboard's onboard sounds).

    Your DAW will likely have latency settings that you can adjust, depending on how your audio drivers are set up. These will probably be in the program options/preferences and will likely affect the lag between the keypress and the resultant sound. Set the latency as low as you can get away with -- you'll know it's set too low when the audio starts glitching.

    Another possibility, if your DAW has it, is to turn off automatic delay compensation. This is a feature that syncs all plugins to each other, but it can introduce lag and you generally don't need it when you're recording/playing.

    I'd also recommend downloading the newest drivers from M-Audio and installing those. It may or may not reduce the input latency, but you're almost certainly better off using M-Audio's official drivers rather than whatever the operating system picked out.
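
    For reference, the delay a given buffer size adds is easy to compute: buffer length divided by sample rate. A quick sketch (the buffer sizes are just common examples):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """Delay contributed by one audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

# Halving the buffer halves the added latency, but gives the CPU less
# time per callback -- push it too far and the audio starts glitching.
low = buffer_latency_ms(128, 44100)    # ~2.9 ms
high = buffer_latency_ms(1024, 44100)  # ~23.2 ms
```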

  12. Can you post an example of the sound you're getting with the 24-bit samples vs. the 16-bit? I, too, would not expect a big enough difference that you would have to change your approach to mixing, so there could be something going on with the DAW and/or sampler.

    Shot in the dark here -- you upgraded EWQLSO from Gold, right? Gold includes only one mic perspective, whereas Platinum, in addition to being 24-bit, has three mic perspectives. It's possible that the increased wetness/heaviness you're hearing comes from the extra two mic positions (which can be toggled in PLAY).

  13. In terms of instruments, a typical smallish orchestra will likely have at a minimum:

    (woodwinds)

    2 flutes

    2 oboes

    2 clarinets

    2 bassoons

    (brass)

    4 French horns

    2 trumpets

    3 trombones

    1 tuba

    (strings)

    first violin section

    second violin section

    viola section

    cello section

    bass section

    (plus some assorted percussion and maybe a harp and piano)

    For the strings, each section contains a bunch of the instrument in question, but each of these instruments in the section typically plays the exact same notes. Normally, then, you will use a sectional patch for each of the string sections, since you need that combined unison within the section most of the time. It's also possible to use a patch that combines all of the five string sections into a single patch. You'll normally only use solo string patches if you (for example) need the distinct sound of a single violin playing something that doesn't relate to the rest of the strings.

    With the woodwinds and the brass, you'll probably use mostly solo patches, though this does depend on your preferences, how much orchestrational detail you want, and how quickly you want to work. Unlike the strings, each of the brass/woodwind instruments is given its own specifically notated line (when written for a real orchestra), though each of the seconds (or thirds or fourths in the case of trombones/French horns) frequently will play the same thing as the firsts. (How often the seconds play unisons with the firsts vs. playing their own line is an orchestrational detail with no real general guidelines.)

    Having, for example, two flutes playing the same thing creates a problem, though: most sample sets only have a single solo patch for each instrument, and layering the same samples on top of themselves will either make the sound too loud or too phasey. There are many ways of dealing with this. You could have no seconds at all. You could just never have the seconds double the firsts. You can write the seconds a step lower than the firsts (to ensure different samples are used) then pitch-bend the seconds up a step. You can use a different library for your seconds. You can use a single library that has separately-recorded seconds. You can use a patch that has a multiple-player unison for that instrument when you need to double (especially common for French horns). It all depends on your preferences and your libraries.

    Sectional patches in the woodwinds and brass that combine multiple instrument types (full brass, trombones + trumpets, bassoon + oboe, etc.) are generally used either because they're way faster than writing with solo patches or because, as a result of being recorded together, they have a better sound than combined solo patches. The limitation, of course, is that you get less control over the details of the orchestration this way.

    As far as getting a full sound goes, it's largely a matter of how you distribute your material across the orchestra. You might think that writing a bunch of separate lines and inner harmonies would sound fuller than a bunch of unisons and octave doublings, but this isn't necessarily the case. Especially in modern film scores, power and fullness, particularly in the brass section, tend to be achieved by having massive numbers of instruments all play pretty much the same thing. As a general principle, don't be afraid to give the same material to multiple instruments/sections at the same time -- doubling things like this will give you a rich and complicated timbre without overcomplicating the musical lines themselves. I think never doubling anything is the single most frequent mistake I hear novice orchestrators make.
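
    As a concrete sketch of the step-lower-then-bend-up trick mentioned above: MIDI pitch bend is a 14-bit value centered at 8192, and with the common default bend range of +/-2 semitones (an assumption -- check your sampler's setting), bending a whole step up means pinning the bend at its maximum:

```python
def pitch_bend_value(semitones: float, bend_range: float = 2.0) -> int:
    """14-bit MIDI pitch-bend value (0-16383, 8192 = no bend) for a
    pitch offset, given the instrument's bend range in semitones."""
    value = 8192 + round(8192 * semitones / bend_range)
    return max(0, min(16383, value))

# Seconds written a whole step (2 semitones) low and bent back up:
whole_step_up = pitch_bend_value(2.0)  # maximum bend
half_step_up = pitch_bend_value(1.0)
centered = pitch_bend_value(0.0)
```

    Since the bend is per-channel, the detuned seconds need to sit on their own MIDI channel so the bend doesn't drag the firsts along with them.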

  14. I haven't installed yet because I'm bogged down in other things right now (and the download will take a couple days with my connection), but your analysis of the sounds seems consistent with my impressions of the demos. I really liked the bread-and-butter sounds -- pianos, basses, drums, etc. -- but not so much the more obscure instruments.

    Part of the reason I upgraded is that I use Miroslav as a supplement to my primary orchestra libraries (for layering and such) and Miroslav's ST2-based player is a bit squirrely on my system. I think part of it is that ST2 is 32-bit and I'm bridging it into a 64-bit host, which will be resolved with ST3 being native 64-bit.

  15. A ReMix of the Sting Chameleon stage from Mega Man X that sounds similar to Barber's Adagio for Strings called "Adagio for Stings."

    I actually looked into doing this recently. The Sting Chameleon melody really kind of meanders when you slow it way down, which is a plus, but even at a slow tempo, its harmonic rhythm is still so much faster than the Barber piece that I think it would be difficult to make it work well.

  16. I bought 8dio's Steinway Legacy Grand purely because Steinways have my favorite piano tone and it captured it really well. It's really deep and smooth. I didn't really care about anything else.

    I'll have to check out Legacy Grand. I, too, tend to prefer Steinways over other makes. (My current favorite sampled piano is the Steinway from QL Pianos.)

  17. Did anyone else notice the site's blurb that said Production Grand has a quasi-binaural player's perspective mic? I think this is the one used in the Bach demo, and it sounds really good. Binaural miking is a great idea, and I hope other piano library devs copy it. I'm not sure I would necessarily use that mic in a mix, but I would love it for playing/recording on headphones.

    A thing that I don't think people talk enough about regarding samples is how actual physical instruments and real performances can be evaluated as good or bad entirely independently of any sampling considerations, which is to say that just being "realistic" does not necessarily make something good. I bring this up because the thing I don't like about Production Grand has nothing to do with its size or design philosophy. What I don't like about it is that it samples a Yamaha C7. Yamahas are popular, marketable, and versatile, and in that respect it totally makes sense to sample one for this library, but I personally don't care for Yamaha's pianos -- I find the tone too bright and the keys' action too stiff. And my experience with other Yamaha-based piano libraries has been that my misgivings about real Yamaha pianos also apply to samples of Yamaha pianos. This is a thing that "realistic" samples can't remedy, because my objection is to the instrument itself and not to the samples' representation of the instrument.

  18. Automating levels on individual tracks or buses to even things out over time is probably the way to go. Assuming the master fader is post-effects, you don't want to touch that, although overall level could be controlled somewhere else, such as a bus before the master or the output of a plugin on the master channel. The main thing to consider is whether changing the relative levels of sections interferes with the impact of the start of a new section -- and that's a decision that can only be made on a case-by-case basis.

    As long as you have the headroom, it doesn't really make a difference whether you're reducing the level of the loud sections or increasing the level of the quiet sections -- the idea is just to balance them out in some way, and you can take care of getting everything back to the proper overall level using the limiter. If you've already set the limiter based on the loud sections, though, it's probably easiest to bring the level of quiet sections up rather than loud down, since that should let you leave the limiter as it is.
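
    The arithmetic behind this is simple enough to sketch (the sample values and the 6 dB figure here are arbitrary): a level change of d dB multiplies amplitude by 10^(d/20), so boosting a quiet section is just scaling its samples, and as long as the boosted peaks stay below the limiter's threshold, the limiter settings can stay put:

```python
def db_to_gain(db: float) -> float:
    """Convert a decibel level change to a linear amplitude factor."""
    return 10 ** (db / 20)

def boost_section(samples, start, end, db):
    """Apply a static dB gain to samples[start:end], leaving the rest alone."""
    g = db_to_gain(db)
    return [s * g if start <= i < end else s
            for i, s in enumerate(samples)]

# +6 dB roughly doubles amplitude, so a quiet middle section at 0.2
# comes up to about 0.4 while the louder material is untouched.
mix = [0.8, 0.8, 0.2, 0.2, 0.8]
balanced = boost_section(mix, 2, 4, 6.0)
```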

  19. There's a small possibility that there's something wonky with the MP3 encoder your DAW's using. If you want to explore this, try exporting as WAV and then converting to MP3 with a different program (iTunes, Audacity, etc.).

    If the MP3 player that doesn't play the file correctly is old, though, it may just be a problem with the player, since back when people first started using MP3s, no one typically encoded at high bitrates. The player may just not be designed to handle them.
