Everything posted by Moseph

  1. AKG K702 High-end studio headphones (the more expensive version of RexAsaurous's K240s, basically), but I also use them for general listening.
  2. I've had this or something similar happen to me before. I think reloading the samples has always fixed it for me, so I've never tried to figure out exactly why it happens.
  3. And if they're asking you to sync the composition to the video (and they don't just want general music that they'll cut to the picture in editing), make sure they give you a locked video -- this means that all of the cuts and scene lengths are finalized. If they go back to the video and change things after you've done the composing, either you'll end up having to waste time rewriting the music (bad) or they'll hack your existing music up to fit the new edit (also bad). Amateur directors especially tend not to understand how much of a problem this is for a composer, so make sure the video they give you is locked, and throw a fit if they tell you it's locked and then change it.
  4. Also, it can be a good idea to specifically pick a tempo that will make events that you want to hit in the video fall on beats rather than between beats. There are online utilities (e.g. http://www.fransabsil.nl/htm/eventhit.htm) that can help you calculate this, and some DAWs (Digital Performer, for one) have a hit calculator built in.
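     Rough sketch of the idea in Python, in case you want to see the arithmetic the calculators are doing (the hit times here are made up, and this isn't any utility's actual code): a hit at t seconds lands on a beat when t * BPM / 60 is close to a whole number, so you can just scan a tempo range and rank the candidates.

         # Find tempos that place video hit points on (or near) beats.
         def beat_error(hit_seconds, bpm):
             """Worst distance (in beats) of any hit from its nearest beat."""
             return max(abs(t * bpm / 60.0 - round(t * bpm / 60.0)) for t in hit_seconds)

         hits = [12.4, 27.8, 44.1]            # example hit times in seconds
         candidates = sorted((beat_error(hits, b / 10.0), b / 10.0)
                             for b in range(800, 1601))   # scan 80.0-160.0 BPM
         for err, bpm in candidates[:5]:
             print(f"{bpm:6.1f} BPM -> max offset {err:.3f} beats")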
  5. Assuming the MIDI in feature works as expected, this should work to capture MIDI output from FL Studio: Install LoopBe1 (or a similar program). This is a virtual MIDI router -- it's like using MIDI cables to hook hardware devices together, but with software. With LoopBe1 running, there should be a new MIDI output assignment (LoopBe1) available to FL Studio's MIDI tracks. Set the FL track to output to LoopBe1, then set LoopBe1 as the MIDI input source in the virtual keyboard. EDIT: Or are you just asking if you can use the virtual keyboard to control a VST in FL? If it can't do that natively, you could certainly use LoopBe1 to patch it into FL as a controller.
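     For the curious, here's the same routing idea expressed in code -- a minimal sketch using the mido library (pip install mido python-rtmidi), not anything FL-specific. The port name is an assumption; LoopBe1 usually shows up as "LoopBe Internal MIDI", but check mido.get_output_names() on your machine.

         import mido

         print(mido.get_output_names())       # find LoopBe1's exact port name here

         out = mido.open_output('LoopBe Internal MIDI')   # assumed port name
         out.send(mido.Message('note_on', note=60, velocity=100))
         out.send(mido.Message('note_off', note=60))

     Anything listening to LoopBe1 as an input (the virtual keyboard, a VST host) receives those messages just as if they'd arrived over a physical cable.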
  6. As far as getting a properly warm orchestra sound, I've found these things helpful: 1) Idiomatic arrangement. For example, don't use a four-French-horn unison patch for your (single) horn line and leave it at that. Write four individual horn parts in harmony for four individual horns. Use two of each woodwind. And double your parts liberally. Try using the basses to double the cellos an octave lower. Try doubling the violin 1 part in the viola. Try doubling the violin 1 part in all of the woodwinds. Things like that. 2) Read up on how to use a multiband compressor (or even a single-band one) on the master channel. When done right, this can help squish everything together so it sounds more cohesive and less like a bunch of separate instruments off in their own worlds. 3) Running everything through an analog tape sim can help with warmth (I use Nomad Factory's Magnetic II for this). I've also started using EQs that deliberately color the sound in an analog-y way rather than more transparent EQs. 4) Reverb settings are important (though I'm not sure of the best way to manage a wet library like EWQL SO since I use mostly very dry orchestra samples). Using a reverb with a lot of pronounced early reflections can increase fullness and warmth. 5) This is not so much a warmth issue as general programming, but I use a two-axis virtual CC controller (AnalogX MIDI Mouse Mod) routed through a virtual interface (LoopBe1) to program velocity crossfades. I have vel crossfade mapped to the y-axis and expression (CC 11) mapped to the x-axis. This lets me fine-tune levels more than using only the crossfade would. For example, if there's an unpleasant break between two velocity layers that I want to avoid, I can run the crossfade up to just below the break and then boost expression instead. I find that recording all of this live is faster and produces better results than drawing curves, and you don't need any hardware to do it. (There's a rough sketch of the two-CC idea below.)
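     Here's that minimal sketch of the two-axis idea from point 5, again with mido and LoopBe1. It assumes the velocity crossfade is mapped to CC 1 (the mod wheel), which is common but library-dependent -- check your patch; the port name is an assumption too.

         import mido

         out = mido.open_output('LoopBe Internal MIDI')   # assumed port name

         def send_xy(x, y, channel=0):
             """x, y in 0.0-1.0: x -> expression (CC 11), y -> crossfade (CC 1)."""
             out.send(mido.Message('control_change', channel=channel,
                                   control=11, value=int(x * 127)))
             out.send(mido.Message('control_change', channel=channel,
                                   control=1, value=int(y * 127)))

         # Hold the crossfade just under a velocity-layer break (y) and
         # push the level with expression instead (x):
         send_xy(x=0.9, y=0.55)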
  7. For most of the staccato notes, I'd try to find a slightly longer articulation to use instead. VSL has an articulation called short detache that I would use in this situation; EWQL SO probably has something similar but it may have a different name. The fifth, sixth, ninth and tenth staccato notes (that is, the really fast ones at 0:12-0:13) I would leave as staccato, or if EWQL has spiccato articulations I might also try that.
  8. Can you post an audio example of what you're working on? Depending on the context, it might sound more natural to use a marcato, portato, or detache articulation rather than a staccato, which would give you a bit more sustain on the note and might help it blend better into a legato passage. You sometimes have to play around with the articulations, and the one you end up using may not always be the obvious choice. EDIT: I'm not sure specifically what other articulations EWQL SO has to choose from since I don't use it; I just know it has a lot.
  9. This actually brings up a good point that hadn't occurred to me since it's never really been an issue for me personally -- keeping everything in the DAW, even temporary sketches, gives you temp tracks that are much easier to work with compared to using an external program. So if part of the track is already finalized, you can write directly against the finalized portion instead of writing the new part in isolation.
  10. A lot of DAWs (Sonar included, since you mention that specifically) have their own notation views, but they're not as sophisticated as a program like Sibelius that's dedicated to notation. I sometimes sketch notated scores by hand on paper (because, like you, I often find it more convenient to know what I'm going to be recording before starting to deal with the DAW), but I've never found DAWs' notation view to be at all useful. If I need to notate something, I'll use Finale because Finale does it well. If I need to create a sample-based MIDI realization, I'll use Sonar's piano roll because notation isn't accurate enough to portray the recorded performance nuances. And if I just need a quick sketch of something to work it out before I record, I do it on paper. The only circumstances I can think of in which DAW notation would be useful would be for printing a quick and dirty instrument part for a player without having to bother with Finale, or if you just feel more comfortable with notation but don't need the full features of a dedicated notation program and don't want to/aren't able to do it on paper. If you fall under the latter case and are using the DAW notation view to compose, you'll still end up either needing to perform/record what you've written or go back and significantly tweak/humanize it, since anything you enter by hand in the notation view will be snapped to the grid. It doesn't seem like it would save that much time compared to performing/exporting MIDI from a notation program or writing on paper. If it feels right to you, though, there's no reason not to do it. One of the reasons I've avoided using the DAW notation view is that when I work with notation I prefer to get away from sampled instruments entirely and just use a piano -- when the sampled instrument is there, the temptation to craft a performance and figure out the keyswitching before I've fully considered the arrangement is just too strong, and I end up wasting time making the instrument sound good when I'm not really even happy with the arrangement yet. Notation view in a DAW just seems like the worst of both worlds to me. EDIT: Regarding Notion specifically, I believe it's ReWire capable, meaning that you can link it to a ReWire capable DAW and use the DAW to control Notion's playback, etc. I've never used Notion so I don't know exactly how it works, but I can't see it being that much more useful than a DAW's native notation view, since even if you have a full score in Notion you'll still probably have to use a DAW piano roll and/or perform the music to get the humanization right. TL;DR Notation programs and DAWs do fundamentally different things, and integrating the two may not significantly streamline your workflow compared to keeping them separate unless you're unconcerned with humanization.
  11. Adding tiny amounts of noise to mask the quantization errors that occur when you convert audio to a lower bit depth. There should be an option in your DAW that lets you set whether dithering occurs when you export to a wave file/mp3. It's probably turned on by default, and you should leave it on.
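     If you want to see what's actually happening, here's a toy sketch in Python/numpy, assuming 16-bit output and TPDF (triangular) dither, which is a common choice -- not what any particular DAW does internally.

         import numpy as np

         def to_16bit(samples):                      # floats in -1.0..1.0
             scaled = samples * 32767.0
             # Triangular noise spanning +/-1 LSB: the sum of two uniforms.
             noise = (np.random.uniform(-0.5, 0.5, scaled.shape)
                      + np.random.uniform(-0.5, 0.5, scaled.shape))
             return np.clip(np.round(scaled + noise), -32768, 32767).astype(np.int16)

         quiet = 0.0001 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
         print(to_16bit(quiet)[:10])    # the noise masks the truncation distortion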
  12. Not sure if everyone commenting knows this already or not, but LMMS has a Windows version, so the OP may not be interested in using Linux for music at all.
  13. No, USB drives aren't necessarily that bad. I've used one myself for samples before. My point with the drive Komplete 8 Ultimate comes on was that the drive is designed specifically as an install drive. I don't know what the technical specs on it are, but I assume it's about the cheapest, slowest drive that it's possible to manufacture. Even if you could install to that drive (which would involve copying already installed folders back to the drive) there's no way to know what its performance would be like, and formatting the drive voids the warranty so if you ever needed to reinstall, NI would be displeased with you when you asked them for a new install drive.
  14. They send you a hard drive, but you can't run the samples from that drive because it's slow and USB-powered and everything on it is in uninstalled form. You use that drive to install Komplete Ultimate to your own drive so you don't have to deal with dozens of DVDs.
  15. Not doable as far as I know, though I don't really use the step sequencer at all. I think if you want to do that, you load the individual samples into a VST sampler like Cyclone or Session Drummer (both included w/ Sonar) and then trigger the VST.
  16. Might be a combination of the address change plus being non-US, or maybe that's their policy for credit cards from particular non-US regions.
  17. Is it just in this project? If you start a new project, does it work? It looks like the problem is that the program is trying to read a location in RAM that it isn't allowed to access, although I have no idea what would cause that.
  18. Also -- there are things to hate about every DAW. Switching DAWs, even if you generally like what you're switching to, doesn't guarantee completely smooth sailing.
  19. I can't find the mention of Finale in the system requirements (link?) but probably what they're referring to is the fact that if you want to use it with Finale, you need such-and-such a version of Finale, because the old versions of Finale -- though some of them came with an integrated version of GPO -- don't support VST and won't be able to run GPO 4. They single Finale out for mention because Garritan and Finale are owned by the same company, but GPO works with any VST host.
  20. Meteo, if you want to make a final attempt to avoid reinstalling, I can upload the QT DLLs from my C:\ProgramData\East West\playgui folder and you can see if those work for you.
  21. And the entry level edition of Sonar is ~$100.
  22. Shot in the dark here, but it sounds like the registry cleaner may have messed up the Windows environment variables that tell programs where DLL files live. The idea is that possible locations for DLLs and other files are stored in a list which gets referenced by programs that need the files -- the program code then doesn't need to be concerned with where these files actually are; it can just ask for the file, and Windows will consult the list, find the file, and hand it to the program. So if that list gets screwed up, programs can no longer find the files even if the files haven't moved. East West, at least on my machine, has an entry in this list that points to several versions of QTcore4, so we'll need to check whether it's there on your machine. (I'm assuming you're on Windows 7.) In the start menu search, type environment and select "Edit the system environment variables." In the box that comes up, click "Environment Variables ..." in the lower right. In the next box, select the "Path" variable and click "Edit ..." Copy the text string from the "Variable value" field and post it here. For comparison, the string from my path variable is: C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\PROGRA~1\MATROX~1\System64;C:\PROGRA~1\MATROX~1\System32;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\ProgramData\East West\playgui;C:\Program Files (x86)\Java\jre6\bin\;C:\MinGW\bin\;C:\MinGW\msys\1.0\bin\;C:\Program Files (x86)\QuickTime\QTSystem\ Note the East West entry. I suspect that your system doesn't have this entry and needs it, though the reference may be slightly different on your machine since the files could be installed in a different place. EDIT: Probably the reason that putting the QT files in the FL Studio folder didn't work is that while putting a DLL in the folder of the program that needs it allows the program to find the DLL without consulting the path variable, the versions of the QT files that you used were probably not the specific versions that PLAY was looking for (which I expect are the ones in the C:\ProgramData\East West\playgui folder). A search of my computer reveals a bunch of QTcore DLLs in various locations that have the same name but are all different sizes. Speculation again, but the reason PLAY works in standalone mode but not with FL Studio is probably that PLAY itself knows where the DLLs are without referencing the path variable since they're part of the program install, whereas calling PLAY from FL Studio probably causes FL Studio to load the DLLs rather than letting PLAY load them, and FL Studio needs to look at the path variable to find them. This is all suggested by the fact that putting the QT DLLs in the FL Studio folder almost worked -- it indicates that FL rather than PLAY is looking for the DLLs.
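     If you'd rather check the list programmatically than click through the dialogs, a quick Python one-off does the same thing (the folder path is the one from my machine; yours may differ):

         import os

         entries = os.environ['PATH'].split(os.pathsep)
         for e in entries:
             print(e)

         expected = r'C:\ProgramData\East West\playgui'
         print('East West entry present?',
               any(e.rstrip('\\').lower() == expected.lower() for e in entries))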
  23. Learn how a choir pronounces words. Singers deliberately use different vowel sounds than those commonly used in everyday speech -- in particular, they avoid things that resemble the schwa sound (ə) -- because it's generally agreed that doing this produces both a more pleasant sound and more easily understood words. For example, the phrase "I will have a pencil" in common speaking diction may sound something like "Aye whil hav uh pensul," whereas with singing diction it may sound more like "Aahy weel haahf aah pehnsihl." Singing diction is a complicated subject; if you've sung in a choir it will definitely come more naturally to you. But even if you're not interested in getting into diction in great depth, it will at least help to be aware that you may need to do kind of unintuitive things with the vowels in WordBuilder in order to get it to sound like a trained choir. You can't necessarily just use the default vowels it gives you and expect it to sound good.
  24. Some people report that drawing tablets cause less hand strain than mice. I have one, and it's perfectly usable for precision studio work, although I find it difficult to move as quickly as with a mouse. Monoprice has some good cheap tablets if you're interested in going that route.
  25. Some people use sample sets that take several minutes to load, which makes reloading projects a pain.