Moseph
Members · 2,040 posts · Days Won: 3

Everything posted by Moseph

  1. I've only used Editor. My understanding of the difference is that Studio has a multitrack editor built in (of which Melodyne Editor is a component) and is way more expensive. Editor can still do the DNA polyphonic note adjustment stuff on its own.
  2. Vvvvvv for Vvvvvvendetta
  3. Yeah, listening to things quietly would also have the same effect. The high and low ends of the frequency spectrum are harder to hear at low volumes.
  4. The un-EQ'd version sounds pretty good to me. EDIT: Do you by any chance listen to a lot of music on iPod earbuds? I ask because the EQ'd version sounds like earbuds; it might be that you're mixing the music so it sounds like what you're most familiar with.
  5. Here's a direct link to the paper that the blog post summarizes. My comments refer specifically to the paper and not the blog post. So they acknowledge that their methodology is inaccurate for Western music, and probably for non-Western music, and then they go ahead and use the methodology anyway? Seriously? The problem with a one-size-fits-all approach is that it doesn't accurately represent how the scales were/are used within their cultures, so it doesn't tell us how congruent with the harmonic series the culture's actual body of music is. If you want to find a culture's musical congruency with the harmonic series, look at actual representative pieces of music from those cultures and use those actual pieces of music to generate your intervals. If you want to then draw conclusions about scales based on the pieces that use those scales, go ahead, but intervals within a scale are not equally used. You cannot treat them as equally used. I don't care if dealing with this is difficult. Not dealing with it means that your results will be inaccurate, and you won't know how inaccurate. Probably the most useful thing (in terms of Western music) that we can get out of this is that the intervals possible in the Western diatonic collection (i.e. all of the modes collectively) are relatively congruent with the overtone series. And even that doesn't tell us anything that we didn't already know. Music theorists have been obsessed for centuries with mathematically simple intervals -- which is to say, intervals that are congruent with the harmonic series. I won't even get into the fact that early Western modes aren't really even the same things as the modes they looked at in the study. I don't know how well the study deals with non-Western musics since I'm not as familiar with them, but I expect similar issues to be present with them. Where's Gario? I'd be interested in what he thinks of the study.
  6. It doesn't sound terrible, but it's definitely skewed toward the mid-range. EDIT: Could you post the un-EQ'd version for comparison?
  7. I figured it would be more complicated than it sounded. I'll have to give it some further thought. Latency isn't a big deal to me. I'm not sure about VSL, but the language used in the Play manual makes it sound like you can run on a network with a single license. ("If you are running the PLAY System on a network and a library's files are on a different computer than the PLAY Advanced Sample Engine accessing those files, then the iLok key needs to be in a USB port of the computer where the PLAY Engine is running.") Or does the FX Teleport way of networking require that both computers access the libraries in a way that requires a license? (EDIT: Rereading the documentation for Play and FX Teleport, this looks like it's the case -- the Play manual is probably just referring to accessing the library files on a shared network drive rather than through something slaved to the DAW via FX Teleport or what have you. Needing multiple licenses effectively scuttles my interest in trying to go the network route.) The biggest issue may be that I'm running a 32-bit OS (and would presumably be running 32-bit on anything I bought used, unless I upgrade). An article I read put the RAM footprint of VSL SE loaded in its entirety at around 5 GB, which means I probably wouldn't be able to concurrently load on a 32-bit slave all of the VSL instruments I'd want to use, anyway. Probably best just to wait until Windows 7 is supported by everything I use and get a new computer then.
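     To put the 32-bit point in concrete terms, here's a quick back-of-the-envelope check as a Python sketch. The ~5 GB figure is the estimate from the article I mentioned; the 2 GB limit is the usual user address space for a 32-bit Windows process (3 GB with the /3GB switch), and everything else here is illustrative rather than measured.

        # Rough feasibility check: can the sample set fit in one 32-bit process?
        USER_ADDRESS_SPACE_GB = 2.0   # default per-process limit on 32-bit Windows
        vsl_se_full_load_gb = 5.0     # estimate from the article cited above

        if vsl_se_full_load_gb > USER_ADDRESS_SPACE_GB:
            shortfall = vsl_se_full_load_gb - USER_ADDRESS_SPACE_GB
            print(f"Short by about {shortfall:.1f} GB; split the template across "
                  f"processes/machines or move to a 64-bit OS.")
        else:
            print("Fits in a single 32-bit slave process.")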
  8. Currently, I use a Core 2 Duo XP SP3 laptop w/ 2 GB RAM (maxed capacity), and I am looking for ways to expand my computing power on the cheap. This is for the purpose of sample library access and nothing else. I am exploring the possibility of grabbing an older system (or two) off of craigslist, putting 3 or 4 GB RAM in it/them, and linking everything to the laptop with FX Teleport or something similar. Does anyone here use this sort of master/slave setup for audio work? Does it work well, or do you think I'd be better off just buying a new system to replace the laptop?
  9. Sounds Online has a two-day sale on Symphonic Choirs for $195 (until Jan. 15th) if anyone wants it for cheap.
  10. DPI settings, maybe. On the Settings tab of Display Properties, click Advanced. Go to the General tab, see what the DPI is set to, and fool around with the setting.
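     If you'd rather check the value without clicking through the dialogs, here's a small Python sketch. It assumes Windows and reads the per-user LogPixels registry value, which may not exist at all if the DPI has never been changed from the default (96 = 100%); the exact registry location can also vary by Windows version, so treat this as a starting point.

        # Read the current DPI setting from the registry (Windows only).
        import winreg

        try:
            key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop")
            dpi, _ = winreg.QueryValueEx(key, "LogPixels")  # often absent on defaults
            winreg.CloseKey(key)
            print(f"Current DPI: {dpi} ({dpi / 96:.0%} of normal size)")
        except FileNotFoundError:
            print("LogPixels not set; using the default 96 DPI (100%).")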
  11. I'd suggest starting with a large-scale outline of some sort before you even open your sequencer so you have something to fall back on if you can't think of anything to do (this post in the epic music thread shows a type of outlining that I occasionally do). It might be as simple as deciding, okay, I'm going to repeat this bassline for forty-five seconds, then use this bassline for thirty seconds, then go back to the first bassline. The point is to find something that makes you think about the piece as a whole before you start fiddling with details such as sound-design. Another fun way to approach this is to take an existing piece of music that you like, break it down into sections, and build something of your own on the sectional outline that roughly follows the musical development of the original. You can pull ideas from the original if you get stuck. If sound selection trips you up, maybe try writing notes first just using generic sounds like piano, and save sound selection until last.
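     Here's the kind of large-scale outline I mean, sketched in Python. All of the section names and durations are made up; the point is just that the overall shape gets decided before any sound design happens.

        # A hypothetical sectional outline: section name plus rough duration in seconds.
        outline = [
            ("bassline A",        45),
            ("bassline B",        30),
            ("bassline A return", 45),
            ("breakdown",         20),
            ("bassline B + lead", 40),
        ]

        elapsed = 0
        for name, seconds in outline:
            print(f"{elapsed // 60}:{elapsed % 60:02d}  {name} ({seconds}s)")
            elapsed += seconds
        print(f"Total: {elapsed // 60}:{elapsed % 60:02d}")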
  12. Thought people might be interested in the process I used to create this piece. I began with this sketch. Nothing fancy, but the three-and-a-half measures at the top contain most of the elements of the piece (labeled 1-5). The percussion pattern, notated with X's, didn't ultimately get used. Number 4 says "brass triad in second inversion," if you can't read it. Number 5 is an upward trumpet gliss. Once I'd decided what these elements were, I put them in order on the lower staves as a way to organize my thoughts. (The i's signify inversion of a motive -- the only inversion I ended up using was 1i). The final piece pretty much retains the ordering of materials given here, with some deviation from it between "soft perc break" and the end of "loud perc break." You should be able to follow the piece while looking at the sketch if you watch the clock times, which I've marked for your convenience. After the sketch, I orchestrated things in Finale. (It would have been faster to do it directly in Sonar, but my computer chokes if I try to load my complete VSL orchestra, so I can't use VSL to work out an arrangement.) It's a pretty straightforward working-out of the materials. Didn't spend a whole lot of time on it; I mostly just wanted something to use StormDrum with. When I finished, I brought the MIDI file into Sonar and programmed the VSL performance. I bounced this down to a .wav and imported that into a new project where I programmed percussion (StormDrum 2 with a bit of VSL). After getting the percussion just right, I went back to the VSL project, tweaked a few things so they'd sit better in the mix, re-bounced, and combined it with the percussion. Elapsed time: a few days. I know the percussion is over the top. It's my trial run for StormDrum and I like the Godzilla patch. Shut up. Guess where most of my melodic material (1, 2, and 3 in the sketch) came from. Check out for the answer.
  13. Might be. I don't have a definite answer for that, but I'd be interested in knowing. I guess physical access wouldn't present the same sort of problem that it does for disk-based drives, but on the other hand, there's still a finite transfer rate that would have to be balanced.
  14. I was actually unprepared for how good the clip ended up sounding, and there are still things about my sample use that could be tweaked to further improve how it sounds. (A new computer would help; I've maxed out my laptop at 2 gigs of RAM and can only have two wordbuilder voices loaded at a time before things start getting wonky.) This is NOT one of those libraries that makes everything sound good. In fact, it will make things sound pretty terrible if it's not used well. I don't know what your audio background is. I was able to jump straight in and come up with this clip within two or three days after installing because the library works almost exactly how I expected it to work, and the wordbuilding process is similar to other things that I have a lot of experience with. If you've never done any heavy-duty audio editing (as in, splicing complicated sounds together so you can't tell that there's a splice), there may be a very, very steep learning curve. Note that a couple of posts ago I was passing judgment on the way other people were using the library even though I'd never used it myself. That's how familiar I am with the editing process involved. This is probably a good metric: If you can look at YouTube tutorials and identify the flaws in the way they do things and articulate how to do things better without ever having touched the software, you're in good shape to jump right into it. If you can barely follow what the tutorials are doing or can't think of any way to improve the sounds they come up with, then there will be a learning curve. Being good at editing is probably the most important part of getting good results. The wordbuilder is a mini multitrack audio editor, and it needs to be approached as such. The back of the box says "type in words for the Choirs to sing," but this is misleading. The library certainly will not sound good if you just type things and use the defaults that it gives you. What you're really doing when you type is selecting several individual samples per note. You cannot think in terms of words; you have to think in terms of these individual samples, which need to be combined convincingly to simulate words. It's like fixing a rock band's lousy studio session -- you start with something that sounds sort of okay, and you need to turn it into something that sounds awesome by sliding all the little pieces around. Having a good idea of what a choir should sound like, particularly in its vowel use, is also important. You can't rely on the pure vowel sounds the wordbuilder gives you by default; almost all of the vowels in the clip I linked are layered combinations of the basic vowels. Real-life choral experience helps. And this probably goes without saying, but you have to be extremely patient. To get a good performance, you need to deal individually with each note and each transition between notes. When your first editing approach doesn't work, you have to be resourceful enough to get the sound you want some other way. So, the two big questions to ask yourself before buying are 1) whether you feel comfortable constructing music by editing together two to six samples per note and 2) whether you're familiar enough with the sound of a choir to know whether your editing sounds realistic. EDIT: I understand that getting the wordbuilder to work in FL is an ordeal because FL doesn't support the VST-MX plugin format that wordbuilder uses. I think you have to use a MIDI yoke to link the stand-alone version of wordbuilder to FL. 
If you're still using FL (or anything else that doesn't do VST-MX), make sure you know exactly how this works before you buy.
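     To make the "individual samples per note" idea concrete, here's a rough mental model in Python. This is not the wordbuilder's actual API or file format, just a way of thinking about what you're editing; all of the names and values are hypothetical.

        # Each sung note is a chain of phoneme-level samples that you position,
        # balance, and crossfade by hand; it is not a "word" the engine handles for you.
        from dataclasses import dataclass, field

        @dataclass
        class Segment:
            sample: str              # e.g. "k", "ah", "oh"
            level_db: float = 0.0
            crossfade_ms: int = 20   # hand-tuned overlap into the next segment

        @dataclass
        class SungNote:
            pitch: str
            segments: list = field(default_factory=list)  # typically 2-6 per note

        # The word "call" on one note, built (and vowel-blended) from pieces:
        note = SungNote("G4", [
            Segment("k", level_db=-3.0, crossfade_ms=10),
            Segment("ah", crossfade_ms=60),
            Segment("oh", level_db=-6.0, crossfade_ms=40),  # layered vowel color
            Segment("l", level_db=-4.0),
        ])
        print(note.pitch + ": " + " + ".join(s.sample for s in note.segments))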
  15. Have now spent some time with Symphonic Choirs. Despite my pre-purchasing doubts, I'm now a believer. Here's me learning the ropes.
  16. ModPlug can save as MIDI. Others probably can as well.
  17. Can't you just open it in a tracker? I think ModPlug does s3m files.
  18. Dark City (Matrix-y), Re-Cycle (horror meets Alice in Wonderland), Twelve Monkeys (time travel etc.), Chocolate (girl beats people up, then beats more people up)
  19. They can make the game so quickly because they're reusing Twilight Princess code and stealing the plot from that fan movie.
  20. I think the piano use here is pretty well within the concerto mold. I like it.
  21. The first Super FX chip games would have been in development around the same time as Link's Awakening was.
  22. Still sounding awesome. The strings are panned left and right a little too hard for my tastes. It feels like there's a hole in the center of the orchestra where there should be second violins and violas. I'm not sure how you have your string patches set up, but if you have all your high string lines together and panned left, you might want to split some of them off and put them more towards center. I'm listening on headphones, though, which magnifies the left/right effect; it may not be an issue on speakers. I love the oboe solo at the end, but I think using B-flat major rather than G major (relative major in relation to G minor instead of parallel major) for the solo might work better. This would avoid the clash between the B-flat and B-natural from G minor to G major, which is jarring since B-natural is currently the first note of the solo. If you want the clash there in order to emphasize the distinction between sections, I understand, but I think the change in texture and mode is enough to set it apart. You should be able to transpose the entire section up a minor third without otherwise changing it if you want to see how B-flat major sounds. The ending still seems a bit abrupt to me. I think it's because both the melody and the harp end on the fifth degree (D) of the scale without clear harmonic support, which leaves things very open. I think closing it might be as simple as adding a G an octave-and-a-half below the harp's final D. Just an idea, though. If you like the open-endedness, you might want to leave it as is. In the section from 3:02-3:55, the harp is too loud. I'm assuming you're going for realism, and you wouldn't be able to hear the harp over the entire orchestra in a live performance. If I were doing it, I'd back the harp off and double the line in the violins and/or all of the winds. (EDIT: Glockenspiel would work also, but too much glockenspiel can get obnoxious.) Other than that harp section, the relative levels of instruments work well. I'd be interested in hearing it without the compressor, though. The mix sounds a little squashed and mid-rangey right now, and the compressor may be contributing to that. I generally use this harmonic exciter on my orchestra mixes. I find that it opens up the sound a bit and gives the mix more realism, and I think it would sound good on this mix (but don't turn the low contour dial up very high). This is a really nice arrangement, all in all.
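     In case it helps, here's a tiny Python sketch of the transposition suggestion: moving everything up a minor third (three semitones) turns G-major material into B-flat-major material. The note numbers below are a hypothetical opening for the solo, not the actual part.

        MINOR_THIRD = 3  # semitones

        def transpose(midi_notes, semitones=MINOR_THIRD):
            return [n + semitones for n in midi_notes]

        solo_in_g = [71, 74, 79]     # B4, D5, G5
        print(transpose(solo_in_g))  # [74, 77, 82] -> D5, F5, B-flat 5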