Everything posted by timaeus222

  1. Oooookay, after 2.5 days---Monday Night (3 hours), Tuesday Night (3 hours), Friday Night (4 hours), and all day Saturday (12.5 hours), it's officially 99.99% done. Just some mastering left to do. Gluing, cutting, and trimming. Never a dull moment (get it? Get it?). I literally got the last 75% of the entire arrangement + final mixing done today. What a time crunch...
  2. Making good progress here. Almost done, but I may add about 10~20 more seconds before I call it a good length. A little more organic than my usual stuff.
  3. 0:35 sounds pretty muddy to me; I think you'll notice that too. The soundscape is pretty neat. Sounds techy/digital. A lil too bass heavy though, and that's partially why it's muddy.
  4. Nah, I just figured if you wanted a loop slicer, a tool that can do that and more would be of interest too (which doesn't mean I'm talking about Glitch in this sentence). Though chopping and slicing are two different things. Chopping = gating, retrigger, shuffle, etc. Slicing = manual cutting up of an audio file.
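The chopping-vs-slicing distinction above can be sketched in code. This is a minimal NumPy toy (a sine tone stands in for a real loop, and the gate pattern values are made up), not how Glitch or any DAW actually implements it:

```python
import numpy as np

sr = 44100
loop = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1-second test tone standing in for a loop

# "Chopping" (gating): the audio keeps playing front-to-back; a rhythmic
# on/off pattern is just applied on top of it.
pattern = np.repeat([1, 0, 1, 1, 0, 1, 0, 0], sr // 8)  # 8 steps per loop (made-up rhythm)
gated = loop[:len(pattern)] * pattern

# "Slicing": the audio file itself is cut into pieces, which can then be
# rearranged, dropped, or repeated manually.
slices = np.array_split(loop, 8)
rearranged = np.concatenate([slices[i] for i in [0, 3, 2, 1, 4, 7, 6, 5]])
```

Gating never moves any audio around; slicing rearranges the material itself, which is why the two feel different even when the end result can sound similar.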
  5. Sounds pretty good so far to me. Some observations:
- Oddly very lofi cymbal at 0:26
- 0:30 strings can be stronger if you want, and I'd suggest it.
- 0:40 strings seem to be falling behind the rhythm.
  6. Basically, just think about whether you want something just for percussion (like bird-cussion™ --> halc), for drum loops (Geist), or for just about anything (Glitch).
  7. These two sources are ridiculously hard to make work together harmonically. My god. I'm about 60% done I think.
  8. True, but the end result is essentially the same with certain effects like gating and retrigger, of course. I assume you can use Ctrl+E to slice loops in Ableton, and that there's no real need to randomize where the DAW reads the loop (i.e. "shuffle" in Glitch v1.3) unless you're interested in granular synthesis I suppose.
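The "shuffle" idea can be sketched quickly too. This is a toy NumPy example (noise stands in for a sliced drum loop; the slice count and RNG seed are arbitrary) of randomizing where the loop is read from; pushed to very small slice sizes, this shades into naive granular synthesis:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 44100
loop = rng.standard_normal(sr)  # noise standing in for a sliced drum loop

# "Shuffle" a la Glitch: instead of reading the loop front-to-back,
# jump to a randomly chosen slice on each step (repeats allowed).
n_slices = 16
slices = np.array_split(loop, n_slices)
order = rng.integers(0, n_slices, size=n_slices)  # random read positions
shuffled = np.concatenate([slices[i] for i in order])
```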
  9. Yeah, it does those two things with retrigger. The FM modulator is pretty sweet too. Also, you can combine layers of effects and save presets that can be loaded onto any MIDI note, with each MIDI note capable of holding a set of differently modified effects. Effectrix works well, but personally, I prefer dBlue.
  10. I personally don't like traditional metal, but if it's made well, I would be okay with it being on the site, and I would objectively compliment it. I'm not entirely against screamo vocals either; one track I've heard had some of that (starting at 3:19), and I still liked it. Granted, the production isn't super good, but ignoring that, it's still rawkin'. The only point where metal goes beyond my preferences is when so much of the track is screamo that there's little variation in the dynamics or texture.
  11. Luckily dBlue Glitch v2.0.2 is Mac/Windows. It'd be $60, but worth like $200.
  12. ...Yeah, those are weird keys to work with... jazzy minor key vs. major key.
  13. That happens to be a question with a very broad answer. Technically, every person has a different hearing "capacity" (assuming they can hear perfectly fine in both ears). When a person first starts music composition, their ear often adjusts to the headphones or speakers they use. Let's say the person uses a pair of Sony MDR-7502s (even though they may be discontinued in certain areas) after using generic store-brand headphones. Looking at the Sonys' frequency distribution, you can see that they don't give much bass at all, and the treble is almost as weak. However, if those were your second pair of headphones, you probably couldn't tell anything was wrong, because your ears had grown accustomed to those specific headphones.

Now let's say you read some reviews online about other headphones that seem better, say the Shure SRH240A. Their frequency distribution shows a good amount of bass improvement and a bit of treble improvement. Specifics aside, it's obvious that the Shure SRH240A has a larger frequency response range than the Sony MDR-7502. So when you replace the Sonys with the Shures, you typically hear that the Shures sound better to your own ears, or at least different. Say you're content using those Shures for a good long while. Over that while, in theory, you'd become able to hear more than when you had the Sonys. Why? Because the wider the frequency response range, the more previously-unapparent instruments in certain songs become apparent and present. How far along people are on this which-headphones-are-best-for-me path dictates their hearing "capacity": the better the audio equipment, the more instruments can be easily picked out.

That aside, the average person is typically at the point in their headphone-purchasing path where they own and constantly replace generic earbuds such as Skullcandys and iPod earbuds. Those people typically have hearing "capacities" close to that of Sony MDR-7502 owners: too little bass and treble, too little bass or treble, too much bass or treble, or too much bass and treble. So really, hearing capacities are all over the place for the average person. Generally speaking, though, the average person hears the lead sound first (typically vocals), followed by the bass, then the kick/snare, then the cymbals, then the harmonies, then the hi-hats. Two exceptions: drummers likely hear the drums first, and guitarists the guitars first, because of their inherent biases toward the instruments they each play. That's why you often see drummers air-drumming or guitarists air-guitaring along to a song. Commonly, the average person would probably hear 1~2 instruments at once on the first listen, and 2~4 if told to listen closely.
  14. A tiny bit too resonant with some lead sounds at higher notes, but that aside, the vast majority of this sounds pretty awesome. =)
  15. I have an interesting suggestion. What if (obviously for V6) instead of a Latest ReMixes sidebar on the right, we put it across the top of the page? People tend to look at the top of a page more often than the bottom, so mixes that currently are nearing the bottom of the page may get less recognition, but it seems reasonable to believe that OCR wants approximately equal recognition for the Latest ReMixes. (Or do you think this would draw too much attention away from the Latest Release/Announcements?)
  16. It's okay if people don't do this. Maybe they're not interested, but then again, you never know who is. You don't have to keep bumping it every few months.
  17. Personally, I just hear a melody or harmony in my head, and then hum it back, and find it on the keyboard, so I like that method.
  18. Yeah, when I try to write chords for every few (6~12) notes, they seem like somewhat unrelated 7th chords. I'm literally chopping up the source in chunks like that, but I had some more going last night. Getting pretty smoove right now.
  19. That's fine, I'm self-taught too. As for the piercing upper mids, I can't really give a very narrow range without being at home and looking at a downloaded audio file, but I'd estimate it to be 4000~7000Hz. Try scooping the frequencies a little bit there, raising and lowering the gain on that frequency band, and see if you can hear a difference that you like. If you raise it, it should eventually get piercing enough for you to hear, though the exact spot may be higher or lower than for someone else because everyone's ears are different. In FL Studio in particular, it's easier because the EQ module lets you see the frequencies being occupied, so you don't have to use only your ears. Fortunately, that's what you use. The brighter the signal, the stronger it is; the thicker the signal, the higher the chance it has of clashing with something else nearby in the frequency range.

Yeah, essentially the bass instrument frequencies and the kick frequencies are "slurring together". That's one possibility. Another possibility is that the drum sample itself is just bad to start with, and you'd need to compensate for that with boosts in the strength frequencies and/or cuts in the problem frequencies: for example, ~4000Hz often for a little bit of high end click, 81~140Hz often for low end punch, and 20~80Hz often for low end boominess. Again, often, but not always.

A third possibility is that you haven't yet learned about sidechaining (or known to look it up if you haven't heard of it). Sidechaining is a method where mixer track A's output pushes down mixer track B's output (at a certain slope and intensity, which can be experimented with) without actually doubling mixer track A's output. In other words, mixer track A is linked to mixer track B silently, while A is linked to the Master track at regular volume. This is useful when, for example, you want your kick and bass to gel together.
The kick pushes the output of the bass down only while the kick plays, and the muddiness is avoided. The reason this is better than scooping the bass and boosting the kick (or vice versa) in the same frequency range is that you maintain the power/body of each part: the kick doesn't lose low end power, and the bass doesn't either, yet their frequencies don't clash audibly.
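The kick-ducks-bass idea above can be sketched like this. It's a toy NumPy example, not any DAW's actual implementation; the envelope coefficients and ducking depth are made-up values:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
bass = 0.5 * np.sin(2 * np.pi * 55 * t)  # sustained bass note

# Two synthetic kick hits per second (decaying 60 Hz thumps).
kick = np.zeros(sr)
for start in (0, sr // 2):
    n = sr // 20
    kick[start:start + n] = np.sin(2 * np.pi * 60 * np.arange(n) / sr) * np.linspace(1, 0, n)

# Crude envelope follower on the kick: fast attack, slow release.
env = np.zeros(sr)
for i in range(1, sr):
    env[i] = max(abs(kick[i]), env[i - 1] * 0.9995)  # 0.9995 = arbitrary release coefficient

# Sidechain: the kick's envelope pushes the bass level down only while the
# kick is sounding; the bass keeps its full low end the rest of the time.
depth = 0.8  # arbitrary ducking amount
ducked_bass = bass * (1 - depth * env / env.max())

mix = kick + ducked_bass
```

Compare this with a static EQ scoop: here the bass only loses level for the few milliseconds the kick needs, instead of losing body permanently.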
  20. I got like a minute done. Gotta figure out how to integrate Edward's theme some moar. D:
  21. Yep, this happens with all new Windows Vista/7 computers afaik. I turn that off too. Just doesn't give authentic sound when it's on. It does not directly affect the exported mp3, it affects your audio output/playback. ASIO is an entirely separate driver from your Primary Audio Driver, so your DAW preview is natural since the output/playback in there uses ASIO. After exporting, your computer uses the Primary Audio Driver for audio playback, so it only seems like the mp3 was messed with. It wasn't. You just hear a modified playback.
  22. Yeah, you're on the right track. I'm not sure how else to describe my suggested process though. I can provide an example of the timbre I'm thinking of, but I wouldn't be able to tell you the specifics of how it's made since you're not using Zebra, you're using Massive. However, it's doable because I recognize that this suggested timbre was made with Massive, for sure. https://soundcloud.com/isworks/shreddage-2-nuclear-dubstep-by (0:41, 0:44, 0:51) Btw, I'm referring to the very first wobble sound at 1:04.
  23. Sounds a bit quiet to me. Maybe raising the volume of the rhodes and adding a bit of saturation could help. Also, I don't think the 808 snare works in a jazz context. The kick is OK, but the sequencing is odd to me at times.
  24. Sounds B/W-esque. It doesn't sound realistic though. That aside, this sounds exactly like the original's notes, key, rhythm, tempo, and harmonies. If I'm reading you correctly, you were just doing this for fun, so... no comment on the conservativeness or the realism issue. Since this isn't exactly intended for OCR, as I believe is the case, it doesn't have to be realistic.

However, if you wanted this to sound realistic, this is what you might end up going through. First of all, you would need resources that allow you to sequence realistically, which means sample libraries with keyswitching and MIDI CC, as well as the ability to work with them and write realistically. Realistic orchestral writing requires careful study of the individual instruments in an orchestra (tuba, french horn, trumpet, trombone, violin I & II, viola, cello, double bass, oboe, clarinet, flute, bassoon, glockenspiel, piano, timpani, chimes, cymbals, bass drum(s), misc. perc. [such as clave, guiro, vibraslap, etc.], and harp) and how they sound in real life. Basically, you'd have to reproduce that quality with careful velocity edits, keyswitching (using particular articulations---types of expression---at any given time), and MIDI CC (for volume swells that add that extra piece of realism). Not to mention there's still some mixing involved: forward/backward stereo imaging (reverb + minimal delay), left/right stereo imaging (panning), and making sure the reverb sounds cohesive across the board. Something to keep in mind, perhaps.
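As a small illustration of the MIDI CC part, here's how a volume-swell curve might be generated rather than drawn by hand. All numbers are hypothetical (CC11 "expression" as the swell controller, 480 PPQ, a whole-note swell, a sine-shaped curve); real libraries differ in which CC they map to dynamics:

```python
import math

PPQ = 480            # ticks per quarter note (assumed)
note_len = 4 * PPQ   # one whole note
n_points = 32        # how densely to sample the CC lane

def swell(x):
    """Rise-and-fall curve in [0, 1]: quiet -> full -> quiet."""
    return math.sin(math.pi * x) ** 2

# (tick, CC11 value) pairs spanning the note, scaled into a 20..127 range
# so the note never drops to full silence mid-phrase.
cc11 = [
    (int(i * note_len / (n_points - 1)),
     int(20 + 107 * swell(i / (n_points - 1))))
    for i in range(n_points)
]
```

Each pair would become one control-change event on the track; the same loop with a different curve shape gives sforzando or fade-out gestures.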