Everything posted by dannthr

  1. You can build graphs of measured frequency-response data for most major headphone models at this site: http://www.headphone.com/technical/product-measurements/build-a-graph/ I have the 7506s as well, but for my next pair, I'm probably looking at getting these bad boys: http://www.sweetwater.com/store/detail/HD650/
  2. Tutti writing has its place. Zimmer-esque sounds are well achieved through that kind of writing because, in general, Zimmer writes very simply. I wouldn't buy Symphobia just for its sound. I mean, the SAM guys know how to record warm and powerful samples, it's their strength, so don't misunderstand me. It's just that Symphobia's THING is sketching and putting stuff together fast. Personally, I don't see myself getting this library because when I spend that kind of cash, I expect a little more flexibility in my instrumentation. Especially if I'm writing any sort of legato strings. I want to be able to control whether my basses and celli are separated by 5ths or 8vas. I also like to have timbral control over my instrumentation. I mean just because I want a contrabassoon and a flute to be in tutti doesn't mean I want my oboe and clarinet to also be in tutti. Especially if I want that glassy sound that bassoons doubled with flutes have.
  3. It's designed for, like... TV composers and sketch-crunchers. That's why all the demos list how long they took to scramble together.
  4. It's okay, I mean Project SAM always has a great sound. They just know how to record for reals. But the thing about Symphobia, which is why it's unlikely I'd ever get it, outside of its price tag, is that it's Ensembles only. Which means everything is tutti. So like... it'd be like, okay, I want a clarinet, or something, but you can't have the clarinet alone, it has to be in unison with, say, a bass clarinet and a contrabass clarinet, or you can't have just a clarinet but you have to have a clarinet in unison with an oboe or something like that. I'm just a control freak, I guess. Everyone is just hoping that SAM will do a great strings library or go back to Symphonic Brass and update the programming. I think Symphobia has its uses, for sure it's great for a quick sketch, but if you want an instrument to play alone, well, you need another library. (And if you think I'm wrong, listen to Swash Buckler again and notice that the violin does not play alone.) EDIT: I am sort of curious about their multis though: Particularly City Noir and Chainsaw Attack
  5. I wish I had a gig for commercials--they average between $5k and $20k for a couple of minutes.
  6. If you can make Parachute Pants cool again, I would really appreciate it! I have like... half a closet of clothes no one wants. >_>
  7. How can you ask for free music with absolutely no guidelines, and not even thank the man when he delivers? All you can say is "cute?" Lame. Tell you what, for $300 I'll make you a Leaving on a Jet Plane track perfected to your tastes. Until then, be grateful Yoozer made you ANYTHING.
  8. That's a really good point. Definitely try to see if you even like making music 8+ hours a day for someone else before you jump into it. Not to mention any crunch time there might be, and there probably WILL be crunching. For me, it was drawing. I love drawing, but I can't do it for someone else. It shall always remain my hobby. I just don't like that kind of commission work. EDIT: Though, I do want to mention something re: fatigue. If it's something you really want to do, if it's one of those dream jobs, then realize that fatigue is something you can work through. Like a muscle. Many composers I know carefully manage their fatigue during the course of a soundtrack development. Maybe they write some days and render other days. Maybe they get their material out when they're hot, and maybe they just work on performances when they're not feeling inspired. But in the end, most all of them face the situation where they're not inspired at all, and yet, they work through it, and maybe their work isn't the best that day, but it's done.
  9. Find a record of this song: http://www.youtube.com/watch?v=UdaHCLlBkWU And then add some beatz with another record--then become a famous white rapper!
  10. Honestly, in today's world, it's easy to combine ANYTHING with a passion for computers. If you're adept at computers, then there are quite a few options as far as day jobs go, if you want to combine it with anything, well, like any filter, that narrows your choices. Keep in mind that if you want to explore the artistic side of music, then you don't need to go to school for it, you can, and it can help, but you don't need it. Everything you need to know about music composition, theory, orchestration, is at the very minimum, 50 years old, and on average over 100 years old. There are a ton of resources available. So I wouldn't worry about the music half. Plus, if you go to a liberal arts college (which I always recommend), then you'll more than likely have access to music courses even if you're a comp sci major. I make a living writing music, but my BA is in English. Now English isn't necessarily one of the great tech degrees where I get job offers right out of school, but my point is that you can study anything you like in school and still do music. Yoozer makes a great point, but at the same time, while it's important to think about protecting yourself, you can combine them if that's what your wish is. Now, if you want to do music technology and synthesis and application development, you'll want to go comp sci all the way. Your love of music will guide your focus even in the world of programming, that's just part of your personality. Pezman, don't be afraid to explore working for other companies. Of course, there will always be the dream job, but definitely be willing to work for OTHER music app developers, there are a ton of great ones. I have a friend who would like to work at Pixar, that was her dream job in college. But right now, she's got a great job making the graphic interface for Lego Universe. Great job, fun project, not her dream job, but that's okay. Be willing to expand your dream. It's easy to get attached to one specific thing, but you might be surprised what satisfies your ambitions once you're out there.
  11. Not really. Perception is subjective. What I hear, maybe someone like Snappleman cannot. What I can do, however, is quantify audio degradation through a scientific analysis. Whether you're interested in that or not is irrelevant to my pursuit of the study. Whether you can hear above 15khz or not is also completely irrelevant to my study.
  12. I actually did begin an analysis project. I started creating comparison graphs showing the detail loss from WAV to 320kbps to 192kbps to 128kbps. The experiment was a test to see whether there would be less loss with songs that were subjected to more processing during their production. My goal is to test the following song types:
Heavily processed orchestral film score
Lightly processed orchestral film score
Hardly processed classical string quartet
Modern Jazz Trio
Heavily distorted modern rock
Acoustic or mostly acoustic rock
Techno-rock fusion
Trance/dance (little to no distortion)
etc...
So far, I've only analyzed the first four, and it's only slightly confirmed my hypothesis. I agree that there are artifacts that won't show up in a graph but that you will hear and dislike. I've also found that mp3 compression tends to average and add sub-bass tones under 20hz, for some reason, which is not good for people who have sub-woofers that can reproduce 5hz sub-bass tones. If anyone wants to run the same kind of comparison, a rough sketch of the procedure is below. Cheers,
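A minimal sketch of one way to build that kind of comparison curve (the numpy/scipy/soundfile stack, the Welch/FFT settings, and the file names here are placeholders, not necessarily the exact tooling behind my graphs):
```python
# Sketch: compare average magnitude spectra of a WAV master and an MP3 re-encode.
# Assumes both files have already been decoded to 16-bit/44.1k WAV (e.g. via lame --decode)
# so they can be read with soundfile; the file names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import welch

def average_spectrum(path, n_fft=4096):
    """Return (frequencies, average magnitude in dB) for a mono fold-down of the file."""
    audio, sr = sf.read(path)
    if audio.ndim == 2:                  # fold stereo to mono for a single comparison curve
        audio = audio.mean(axis=1)
    freqs, psd = welch(audio, fs=sr, nperseg=n_fft)
    return freqs, 10 * np.log10(psd + 1e-12)

freqs, ref_db = average_spectrum("master.wav")
_, mp3_db = average_spectrum("master_128kbps_decoded.wav")

# Positive values = energy present in the master that the 128kbps encode lost.
loss_db = ref_db - mp3_db
top_octave = freqs >= 10000              # the last audible octave, where the loss concentrates
print(f"mean loss above 10khz: {loss_db[top_octave].mean():.1f} dB")
print(f"mean loss below 10khz: {loss_db[~top_octave].mean():.1f} dB")
```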
  13. Here is a spectral analysis of the two versions. This sampling was taken by recording the source across an S/PDIF interface (Sony/Philips Digital Interface) set at 16-bit, 44.1k. There was no D/A/D conversion whatsoever. The light blue is the 320kbps version, the dark blue is the 128kbps version.
As you can see, this graph represents the ideal natural audible range for human beings, but of course, actual audible range varies from individual to individual. If you are unfamiliar with how Hz relates to pitch, you should know that the relationship is logarithmic: each octave up the musical scale doubles the frequency. So an octave above concert A (which is 440hz) is 880hz, and so on.
Anyway, back to the graph. What you'll notice is that, aside from the very ends of the spectrum, the 128kbps version maintains a reasonable amount of spectral detail. Apart from simply having fewer peaks and troughs, it resembles the 320kbps version well enough to be very convincing. This is actually really important because I generally consider 320kbps to be overkill for most people's listening purposes.
However, if you look at the very top of the graph, you'll see the last little section ranging from 10khz to 22.05khz. This section, while numerically representing an entire HALF of the human audible range, is only the very last octave. The instruments you'll most likely find making significant appearances in that part of the hearing range are cymbals, various drums, and things like whistles or high resonant pitches--maybe some synths, etc. However, despite the fact that it is the last octave, as you'll see on the graph, there is a NOT INSIGNIFICANT loss of detail between the 320kbps version and the 128kbps version.
But if it's such a big deal, why can't Snappleman hear the difference? Because he uses crappy monitors. The following is a frequency response measurement taken by HeadRoom, a consumer-reports-style agency for headphones. It is the frequency response of a conventional, piece-of-crap iPod ear-bud: What you'll note is that the iPod ear-bud loses detail at the high frequencies in the same way that the 128kbps version does. This is an important relationship because 128kbps mp3 files are perfect for people who want to store a large volume of music in a small space (it's why we call it compression), and it's why Apple doesn't bother making better headphones. If your headphones lose the detail up there, then it doesn't matter whether you had it in the first place.
However, for those of you who enjoy the listening experience or are curious what those of us who claim to hear the artifacts are actually hearing, I have isolated the 10khz to 22.05khz frequencies in the recordings and have put them in a wav file for your listening pleasure. To make sure that you can hear the difference, I have dropped the pitch of the isolated parts one and a half octaves. This brings the isolated range into a more audible range, even for iPod budders, and because I used a non-destructive pitch shift with absolutely no time-stretching, it essentially strips away the similarities between the two versions so you will more easily recognize the compression artifacting. I suppose, to be fair, I won't say which version is which, but I want you to listen for the artifacting that gives it away. Typically you'll hear a very high-pitched tinkling noise or metallic pulsing coinciding with higher-pitched, louder, or even just staccato sections.
Link: 10-20khz isolated, with 1.5 octave drop
You may also hear that the 320kbps version has a much more defined sound because it retains these high-pitched details. With the right equipment and a pair of analytical ears, you'll be able to spot these compression artifacts at regular speed and regular pitch. This is important training for anyone who actually wants to go into mixing or mastering, as these kinds of ear skills are a necessary part of the discerning process. If you want to try the same isolation trick on your own material, a rough sketch of one way to do it is below. I hope that was enlightening. Cheers,
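The isolation trick itself is nothing exotic: a steep high-pass around 10khz, then a duration-preserving pitch shift down 18 semitones (12 semitones per octave, so 1.5 octaves). Here's a rough sketch of one way to do it; the scipy/librosa stack and the file names are placeholder assumptions, not the exact tools used for the linked clip:
```python
# Sketch: isolate the top octave (~10khz and up) and drop it 1.5 octaves (18 semitones)
# so the compression artifacts sit in a comfortable listening range.
import soundfile as sf
import librosa
from scipy.signal import butter, sosfilt

def isolate_and_drop(path_in, path_out, cutoff_hz=10000, octaves_down=1.5):
    audio, sr = sf.read(path_in)
    if audio.ndim == 2:
        audio = audio.mean(axis=1)       # mono is fine for an artifact-spotting exercise

    # Steep high-pass: keep only the content above the cutoff.
    sos = butter(8, cutoff_hz, btype="highpass", fs=sr, output="sos")
    band = sosfilt(sos, audio)

    # Duration-preserving pitch shift: 12 semitones per octave, shifted downward.
    shifted = librosa.effects.pitch_shift(band, sr=sr, n_steps=-12 * octaves_down)
    sf.write(path_out, shifted, sr)

isolate_and_drop("mix_320kbps_decoded.wav", "mix_320_top_octave_dropped.wav")
isolate_and_drop("mix_128kbps_decoded.wav", "mix_128_top_octave_dropped.wav")
```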
  14. I listened on Sony MDR7506s and could easily tell the difference between the two samples. The second one obviously had bitrate compression artifacts.
  15. Heh, thought I'd spread the word. Academics strike again with this wildly HUGE selection of ABSOLUTELY FREE samples: http://wiki.laptop.org/go/Sound_samples Happy Composing,
  16. Sorry, it wasn't clear that's what you were talking about--I thought you meant you'd give a serious attempt at replying to your criticism.
  17. How about post when you have something of substance to inquire about or to say, eh?
  18. Maybe you don't remember them like you used to, or maybe you never did at all. A, B, C#, D, E, F#, G#, ... Remember, that G has to be SHARP.
  19. To the OP, welcome to one third of my Kontakt wish list. I refuse to upgrade to a new version of Kontakt until they satisfy this wish list:
1) Integrate the usage of VST fx into the already established internal fx structure--if not for all VST fx, then at least NI's proprietary fx like Guitar Rig.
2) Modular MIDI ports. MIDI ports should be able to be added and removed dynamically instead of this BS 4-port limitation.
3) 64-bit, for god's sake! Why not? Why has it taken so long? Lazy programming.
They didn't properly program the original app with dynamic and separate porting designs in place, and now it's probably a huge, messy coding-knot. Multiple instances of Kontakt is chez suckage. There's no way to reset I/O settings upon multiple instancing, and Kontakt seizes complete control over the output devices, so you need multiple audio I/Os to xfer from a slave comp to a host. Otherwise, used in a DAW, it's... okay. Additionally, Kontakt 3 is supposed to boast a limited but decent guitar amp emulation fx package--have you tried that out?
  20. Damn, I would love to work on this--Chance's music on LoTRO was fantastic; is this one of his tracks? Either way, it's a beautiful track and definitely deserves some lovin'.
  21. You should use them both: Here is a version with the neck in one channel and the middle pick-up in the other channel: http://www.dannthr.com/music/tests/lufiabothpickup.mp3
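Nothing fancy going on in that mix: the two takes are just hard-panned, one per channel. A minimal sketch of doing the same outside a DAW (numpy/soundfile, mono takes, and the file names are all placeholder assumptions):
```python
# Sketch: put the neck-pickup take in the left channel and the middle-pickup take
# in the right channel of one stereo file. Assumes both takes were recorded mono
# at the same sample rate; any DAW pan pot does the same job.
import numpy as np
import soundfile as sf

neck, sr = sf.read("lufia_neck_pickup.wav")
middle, sr2 = sf.read("lufia_middle_pickup.wav")
assert sr == sr2, "both takes need the same sample rate"

# Trim to the shorter take so the channels stay aligned.
n = min(len(neck), len(middle))
stereo = np.column_stack([neck[:n], middle[:n]])   # column 0 = left, column 1 = right

sf.write("lufia_both_pickups.wav", stereo, sr)
```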
  22. What exactly are you trying to do that is different than what Finale, Sibelius, or any number of staff editors in almost every DAW does?
  23. Hey Z! It's sounding great, man! I know you're knee-deep in this theme and the music is really solid, but I would suggest two things: Elfman is schizophrenic when it comes to arranging for orch. He loves moving around the sections and bouncing the themes all over the place. Here's a really great ref for that: The theme is like 4 seconds long, but he uses it over and over and over again back to back to back because he changes up the instrumentation. Now, in regards to your piece, man, for me, it doesn't really get interesting until 1:15 when you change up the composition and the instrumentation gets a little spicier. However, consider things like this: between 1:36 and 1:40 you have a little pizz motif going on with two similar 4-note passages, one modulated from the other--now imagine changing the instrumentation of the second one, throw in some brass, catch your audience a little off-guard. That's what Elfman does, and it's what he does so well. The section immediately after that I have a hard time with, it's just so... James Hornerish. Don't let us get so at ease with the atmosphere; maybe use some of the classic choral elements Elfman always throws in. Give it a little more ghost. Keep us engaged. And seriously, don't be afraid to give the brass some big quotes! One of Elfman's best: http://www.youtube.com/watch?v=oFAfiLA97mM He doesn't let you get comfortable with it either; if he has a nice theme, he twists it a little darker, he'll throw something in there that doesn't let you get the warm fuzzies like your Sus4->Maj resolutions. He loves odd time signatures or odd-measured phrasings, but most importantly, he loves odd scale modulations and flourishing around the orchestra. Anyway, man, those are my thoughts--keep up the good work! EDIT: Here's a great recent clip! http://www.youtube.com/watch?v=Bv9FuTGSLSE