
timaeus222

Members · Posts: 6,128 · Days Won: 49

Everything posted by timaeus222

  1. Ignoring every other condition, yes, a 16-minute piece is acceptable, but it's not recommended unless you are confident in your ability to break down the sources and can accurately write out where each one is used, with usages substantial enough to count as true source usage. In fact, source breakdowns are recommended for remixes longer than 7 minutes, and even at 4 minutes they're still very helpful.
  2. There aren't all that many free ones out there. :( That's why I like using FL so much. BlueCat FreqAnalyst is alright, but it's just a spectral analyzer. s(M)exoscope is a fantastic oscilloscope that I've found extremely useful (it can help with loudness, transient processing, and overall mastering).

    Voxengo GlissEQ could work. It's free as a demo, but the full is $100, and the demo does have limitations.

    Demo: Setting up the Visual Spectrum -

    I don't know of any free plugins that are as capable/intuitive as some commercial ones. FabFilter Pro-Q would be my suggestion for a good commercial spectral analyzer/EQ tool.
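To make the "spectral analyzer" idea above concrete, here's a minimal sketch of what plugins like FreqAnalyst compute under the hood, using only NumPy. The function name and the sine-wave test signal are my own illustrative choices, not anything from the plugins mentioned.

```python
# Minimal sketch of a spectral analyzer's core math, using only numpy.
import numpy as np

def spectrum_db(signal, sample_rate):
    """Return (frequencies in Hz, magnitudes in dB) for a mono signal."""
    window = np.hanning(len(signal))           # window to reduce spectral leakage
    mags = np.abs(np.fft.rfft(signal * window))
    mags_db = 20 * np.log10(mags + 1e-12)      # tiny offset avoids log(0)
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    return freqs, mags_db

# A 440 Hz sine should show its spectral peak near 440 Hz.
sr = 44100
t = np.arange(sr) / sr
freqs, mags = spectrum_db(np.sin(2 * np.pi * 440 * t), sr)
print(round(freqs[np.argmax(mags)]))
```

A real analyzer just repeats this on short overlapping chunks of the audio and draws the result as a moving curve.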

  3. My theory is that Shuffle works like granular synthesis: it reads random segments of specific lengths within a particular time frame, adding minimal fades to merge those segments together without clicks. Not as flexible as true granular synthesis, but fairly similar, I think. And while it doesn't truly slice, Glitch has functions that work similarly to slicing. In general, Geist is the more accurate fit, but Glitch is also a fun tool to work with.
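The shuffle-as-granular theory above can be sketched in a few lines: grab random fixed-length segments from a buffer and put short fades at each edge so the joins don't click. This is purely illustrative; the function name, grain length, and fade length are made-up values, not Glitch's actual internals.

```python
# Rough sketch of "shuffle"-style reading: random segments, edge fades.
import random
import math

def shuffle_grains(samples, grain_len=2048, fade_len=64, n_grains=8, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n_grains):
        start = rng.randrange(0, len(samples) - grain_len)
        grain = samples[start:start + grain_len]
        # short linear fade-in/fade-out on each grain to avoid clicks
        for i, s in enumerate(grain):
            gain = min(1.0, i / fade_len, (grain_len - 1 - i) / fade_len)
            out.append(s * gain)
    return out

# Feed it any mono buffer (list of floats), e.g. one second of a 220 Hz sine.
buf = [math.sin(2 * math.pi * 220 * n / 44100) for n in range(44100)]
result = shuffle_grains(buf)
print(len(result))  # 8 grains * 2048 samples = 16384
```

True granular synthesis would also overlap the grains and vary their pitch and density, which is the extra flexibility mentioned above.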
  4. You're in no way barred from using soundfonts, but if you do, the production will probably be held to a higher standard. Regardless, direct MIDI instrument replacements aren't accepted. http://ocremix.org/info/Submission_Standards_and_Instructions
  5. Oooookay, after 2.5 days---Monday Night (3 hours), Tuesday Night (3 hours), Friday Night (4 hours), and all day Saturday (12.5 hours), it's officially 99.99% done. Just some mastering left to do. Gluing, cutting, and trimming. Never a dull moment (get it? Get it?). I literally got the last 75% of the entire arrangement + final mixing done today. What a time crunch...
  6. Making good progress here. Almost done, but I may add about 10~20 more seconds before I call it a good length. A little more organic than my usual stuff.
  7. 0:35 sounds pretty muddy to me; I think you'll notice that too. The soundscape is pretty neat. Sounds techy/digital. A little too bass-heavy though, and that's partially why it's muddy.
  8. Nah, I just figured that if you wanted a loop slicer, a tool that can do that and more would be of interest too (and I don't mean Glitch in this sentence). Though chopping and slicing are two different things: chopping = gating, retrigger, shuffle, etc.; slicing = manually cutting up an audio file.
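The chopping/slicing distinction above, sketched in plain Python. "Retrigger" (a chopping effect) restarts a small chunk over and over; "slicing" just cuts the buffer into pieces you can rearrange by hand. The function names are illustrative, and the list of ints stands in for audio samples.

```python
# Chopping vs. slicing, in miniature.

def retrigger(samples, chunk_len, repeats):
    """Chopping: loop the first chunk_len samples `repeats` times."""
    return samples[:chunk_len] * repeats

def slice_buffer(samples, n_slices):
    """Slicing: cut the buffer into n_slices equal pieces."""
    step = len(samples) // n_slices
    return [samples[i * step:(i + 1) * step] for i in range(n_slices)]

beat = list(range(16))              # stand-in for 16 audio samples
print(retrigger(beat, 4, 3))        # [0,1,2,3, 0,1,2,3, 0,1,2,3]
print(len(slice_buffer(beat, 4)))   # 4 slices of 4 samples each
```

The point: a chopper generates the result in real time from the incoming audio, while slicing gives you pieces to place manually.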
  9. Sounds pretty good so far to me. Some observations:
     - Oddly very lofi cymbal at 0:26
     - 0:30 strings can be stronger if you want, and I'd suggest it
     - 0:40 strings seem to be falling behind the rhythm
  10. Basically, just think about whether you want something for just percussive (like bird-cussion™ --> halc) or drum loops (Geist), or just about anything (Glitch).
  11. These two sources are ridiculously hard to make work together harmonically. My god. I'm about 60% done I think.
  12. True, but with certain effects like gating and retrigger, the end result is essentially the same. I assume you can use Ctrl+E to slice loops in Ableton, and that there's no real need to randomize where the DAW reads the loop (i.e. "shuffle" in Glitch v1.3) unless you're interested in granular synthesis, I suppose.
  13. Yeah, it does those two things with retrigger. The FM modulator is pretty sweet too. Also, you can combine layers of effects and save presets that can be loaded onto any MIDI note, with each MIDI note capable of holding a set of differently modified effects. Effectrix works well, but personally, I prefer dBlue.
  14. I personally don't like traditional metal, but if it's made well, I would be okay with it being on the site. I would objectively compliment it. However, I'm not entirely against screamo vocals either. had some of that (starts at 3:19), and I still liked it. Granted, the production isn't super good, but ignoring that, it's still rawkin'. The only point where metal goes beyond my preferences is if so much of the track is screamo that there's little variation in the dynamics or texture.
  15. Luckily dBlue Glitch v2.0.2 is Mac/Windows. It'd be $60, but it's worth like $200.
  16. ...Yeah, those are weird keys to work with... jazzy minor key vs. major key.
  17. That happens to be a question with a very broad answer. Technically, every person has a different hearing "capacity" (assuming they can hear perfectly fine in both ears). When a person first starts composing music, their ear often adjusts to the headphones or speakers they use. Let's say the person uses a pair of Sony MDR-7502s (even though they may be discontinued in certain areas) after using generic store-brand headphones. The Sonys have this frequency distribution. Looking at that, you can see that it doesn't give much bass at all, and the treble is almost as weak. However, if those were your second pair of headphones, you probably couldn't tell anything was wrong, because your ears had grown accustomed to those new headphones specifically.

Now let's say you read some reviews online about other headphones that seem better, and the pair you pick is the Shure SRH240A. Those have this frequency distribution. You can see there's a good amount of bass improvement and a bit of treble improvement. Specifics aside, it's obvious that the Shure SRH240A has a larger frequency response range than the Sony MDR-7502. So it's typically the case that when you replaced the Sonys with the Shures, you heard that the Shures sound better to your own ears, or at least different.

Let's say you're content with those Shures for a good long while. Over that while, in theory you would be able to hear more than when you had the Sonys. Why? Because the wider the frequency response range, the more previously-unapparent instruments in certain songs become apparent and present. How far people are along this which-headphones-are-best-for-me path dictates their hearing "capacity". The better the audio equipment, the larger the number of instruments that can be easily observed.
That aside, the average person is typically at the point in their headphone-purchasing path where they own and constantly replace generic earbuds such as Skullcandy and iPod earbuds. Those people typically have hearing "capacities" close to those of Sony MDR-7502 owners: too little bass and treble, too little bass or treble, too much bass or treble, or too much bass and treble. So really, hearing capacities are all over the place for the average person. Generally speaking, though, the average person hears the lead sound first (typically vocals), followed by the bass, then the kick/snare, then the cymbals, then the harmonies, then the hi-hats. Two exceptions: drummers likely hear the drums first, and guitarists the guitars first, because of their inherent biases toward the instruments they each play. That's why you often see drummers air-drumming or guitarists air-guitaring to a song. Commonly, the average person would probably hear 1~2 instruments at once on a first listen, and 2~4 if told to listen closely.
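To make the headphone argument above concrete, here's a toy model: the same three-band mix "heard" through two invented response curves, one bass/treble-weak and one wider. The gain numbers and the audibility threshold are completely made up for illustration; they are not measurements of the Sony or Shure models mentioned.

```python
# Toy model: which bands of a mix clear a (made-up) audibility threshold
# through two hypothetical headphone response curves.
mix = {"bass": 1.0, "mids": 1.0, "treble": 1.0}     # equal energy per band

weak_headphones  = {"bass": 0.2, "mids": 1.0, "treble": 0.4}  # invented gains
wider_headphones = {"bass": 0.8, "mids": 1.0, "treble": 0.7}  # invented gains

def audible_bands(mix, response, threshold=0.5):
    """Bands whose perceived level clears the threshold."""
    return [b for b in mix if mix[b] * response[b] >= threshold]

print(audible_bands(mix, weak_headphones))   # only the mids survive
print(audible_bands(mix, wider_headphones))  # all three bands come through
```

Same mix, different gear, different number of "hearable" parts, which is the whole point of the headphone-upgrade path described above.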
  18. A tiny bit too resonant with some lead sounds at higher notes, but that aside, the vast majority of this sounds pretty awesome. =)
  19. I have an interesting suggestion. What if (obviously for V6) instead of a Latest ReMixes sidebar on the right, we put it across the top of the page? People tend to look at the top of a page more often than the bottom, so mixes that currently are nearing the bottom of the page may get less recognition, but it seems reasonable to believe that OCR wants approximately equal recognition for the Latest ReMixes. (Or do you think this would draw too much attention away from the Latest Release/Announcements?)
  20. It's okay if people don't do this. Maybe they're not interested, but then again, you never know who is. You don't have to keep bumping it every few months.
  21. Personally, I just hear a melody or harmony in my head, then hum it back and find it on the keyboard, so I like that method.
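For the hum-it-then-find-it workflow above, the find-it-on-the-keyboard step is just logarithms relative to A4 = 440 Hz. A small sketch (the function name is my own; the note math is standard equal temperament):

```python
# Map a hummed frequency to the nearest equal-tempered keyboard note.
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Return the nearest note name, e.g. 'A4', for a frequency in Hz."""
    semitones_from_a4 = round(12 * math.log2(freq_hz / 440.0))
    midi = 69 + semitones_from_a4        # MIDI note 69 = A4
    return NAMES[midi % 12] + str(midi // 12 - 1)

print(nearest_note(440.0))    # A4
print(nearest_note(261.63))   # C4 (middle C)
```

This is what pitch-detection tuners do after estimating the fundamental of your hum.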
  22. Yeah, when I try to write chords for every few (6~12) notes, they seem like somewhat unrelated 7th chords. I'm literally chopping up the source in chunks like that, but I had some more going last night. Getting pretty smoove right now.