Everything posted by timaeus222

  1. I use a Samsung Galaxy S4 Mini, and I can see the Off-Topic forum when logged in.
  2. I think the rhythm guitars are oddly washy. Compare them with other mixes like these, and you should hear that you have a lot of reverb in comparison:

     https://soundcloud.com/isworks/shreddage-bass-2-colossal-force-by-tony-dickinson
     https://soundcloud.com/isworks/shreddage-ibz-ai-wo-torimodose-by-crystal-kings-arr-andrew-aversa
     https://soundcloud.com/magnus-engdahl/wing-it

     The reverb makes it hard to hear distinct bass notes. Also, the kick drum is hardly present besides its click, and the snare is fairly thin. Overall, I'm getting a distant feel that just doesn't sound intentional. I like the cover, but it has room for improvement in the mixing. You have a pretty great guitar tone, which is unfortunately thinned out somehow.
  3. Here's another way to put it: https://www.youtube.com/watch?v=NQAVT6H_umo&t=62m38s So, I would suggest not trying to make super advanced stuff until you've further refined the techniques you're capable of now.
  4. My first guess is that you were using the same sample repitched for each note. That gives you a very unrealistic sound, because repitching a single sample creates audio artifacts that make it sound unnaturally darker at lower pitches and unnaturally thinner at higher pitches (see the sketch below). It's noticeably not the same thing as a professionally-recorded sample library with optimized scripting, per-key samples ("chromatic sampling") with round robins (multiple rotating or randomized recordings of each note), etc. It might actually be better to use a soundfont instead of literal audio samples of an organic instrument, because at least those have, I'd guess, two or three samples per octave. It's not 3-8 samples per key, but at least it's not one sample stretched over 8 octaves.
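     To see why repitching changes timbre, here is a minimal sketch (not tied to any particular sampler; the fixed 2 kHz tone is a made-up stand-in for an instrument's body resonance). Repitching by resampling rescales every frequency in the recording, resonances included:

     ```python
     import numpy as np

     def repitch(sample: np.ndarray, semitones: float) -> np.ndarray:
         """Naive repitch: read the sample back at a new rate (linear interp)."""
         ratio = 2.0 ** (semitones / 12.0)        # pitch ratio
         n_out = int(len(sample) / ratio)         # shorter when pitched up
         positions = np.arange(n_out) * ratio     # fractional read positions
         return np.interp(positions, np.arange(len(sample)), sample)

     sr = 44100
     t = np.arange(sr) / sr
     # Toy "recording": a 440 Hz note plus a fixed 2 kHz resonance standing
     # in for the instrument's body/formant.
     note = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

     up = repitch(note, +12)    # resonance lands at 4 kHz -> unnaturally thin
     down = repitch(note, -12)  # resonance lands at 1 kHz -> unnaturally dark
     ```

     A chromatically sampled library avoids this by recording each key, so the resonances stay where the real instrument puts them.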
  5. Let's be honest: you sound too new at this to attempt something you're not familiar with. I would advise you to get to know your DAW better before trying to emulate something that you're asking so many questions about (and in a scattered manner, at that). If I tried to truly explain how to do it, I would have to use so much jargon that you'd have to filter through it anyway. I can give some more specific comments, though they assume you are comfortable with your DAW, or at least roughly know what EQ, compression, and samples are.

     There's no genre change at 0:57. Metal is not easy, and basically requires commercial samples, or a real guitar and a real amp, if you want that particular sound. Guitar soundfonts and other free guitar stuff are pretty unconvincing IMO.

     The general sound of this remix is compressed electronic dance music. It uses very typical glitch hop or dubstep drums with a basic drum pattern that you could probably do, just with different samples. The hard part of mixing this is getting the bass to sit right, because it's easy to add clutter in the 200~400 Hz range without realizing it; many instruments overlap there. Taking a few years to learn how to mix is what would benefit this kind of music most.

     Even now I'm using jargon, so you should see my point when I say that you should take the time to learn it yourself, instead of asking 50 questions in a row and hoping you can pick it up right away from reading about it. You have to put into practice what you take in.
  6. I tried FL Mobile and I'm just not impressed; it totally ruins my workflow. You may want to wait until FL Mobile 3 comes out and check that out before deciding, because it's much more like the desktop version.
  7. Maybe short segments of finished tracks? Not sure I'd actually show album WIPs to the public.
  8. Well, it's not like Komplete 11 Select is an orchestra and nothing else. There are definitely synths in there; I don't remember all of them, but surely at least one has enough features for you to make synth pads (which are pretty essential for atmosphere or, well, the textural padding that fills in the soundscape). Massive and Prism, amongst other synths, are included, and those can definitely make synth pads. Prism in particular sounds like it could be really good for physical modeling (synthesizing physical instruments), which can make for some really interesting acoustic atmospheres (bells, plucked instruments, etc.).
  9. No one's asking you to achieve 100% realism. We're asking for "realistic", as in convincing to the general audience. So you should at least work on the feedback I gave you, because that was the bare minimum. I could have been pickier and told you to record event edits instead of mousing them in, but I would have waited until you understood the point of doing the CC11 in the first place. I could also have said to layer similar articulations together to thicken up your sound, which is entirely possible in EWQLSO, but the added wash from the baked-in reverb might outweigh the benefit of the thicker sound. The goal isn't to achieve perfect realism; it's to approximate it. If you have EWQLSO Platinum, then I'm sure a capable person can follow the advice without hitting a barrier created by the library itself. (For what CC11 automation looks like in raw MIDI terms, see the sketch below.)
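     Here is a hedged sketch of that CC11 point, using the Python mido library; the note, curve shape, and step count are arbitrary choices. A sampler with expression mapped to CC11 would respond by swelling the sustained note:

     ```python
     from mido import Message, MidiFile, MidiTrack

     mid = MidiFile()              # defaults to 480 ticks per beat
     track = MidiTrack()
     mid.tracks.append(track)

     track.append(Message('note_on', note=60, velocity=90, time=0))

     # Ramp CC11 (expression) from soft to full over one beat, in 16 steps,
     # emulating an increase in incoming breath.
     steps = 16
     for i in range(steps):
         value = 40 + (127 - 40) * i // (steps - 1)
         track.append(Message('control_change', control=11, value=value,
                              time=480 // steps))

     track.append(Message('note_off', note=60, velocity=0, time=480))
     mid.save('cc11_swell.mid')
     ```

     Drawing the same ramp with a mouse produces identical data; recording it from a fader or mod wheel just gives it a more human shape.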
  10. So in other words you want a support orchestra. String beds, flute leads, etc., but not the whole thing.
  11. Listening to something and understanding something are two completely different things. I don't believe you processed them thoroughly enough to train your brain; if you had, your music would be more refined. Just calling it like it is. You say it's "easier said than done", which is true, but that's not an excuse to spend less time on it and assume you'll never get it. Assuming you'll never get it is a great way to not get it. So you have to put more of your time into learning how to write for orchestra if you want to write something more realistic.

     I do hear the similar styles, but you're still missing the expression present in the SMG example, real or not. Your orchestra is a little distant, which means the close/room/hall mic mix is skewed towards the hall mic (or away from the close mic), or you don't have the flexibility to mix all three together (you wouldn't below the Platinum version of EWQLSO, I believe), or your samples have reverb baked in.

     Your brass in particular is noticeably lacking dynamic crossfade automation, so the high-dynamic "blatty" brass tone is constantly there, with no emulated decrease in incoming breath via a lower dynamic (often done with CC1 or CC11). Even if you don't have dynamic crossfades, you could at least record volume event edits. It's not the same thing, but it approximates it.

     You also haven't fully accounted for the slow attacks of certain articulations (particularly the non-staccato brass and strings), so the slow articulations are late. It would help to shift them back in time a little so they land more on-rhythm. As Slimy mentioned, a lot of your notes sound quantized. One way to help that is to write bigger chords and offset the notes within them; that gives you a bigger sound and a bit more flexibility when aligning the note transients. Both fixes are sketched below.
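     A minimal sketch of those two fixes on a generic note list (times in seconds); the attack-compensation values here are hypothetical and would be tuned by ear per patch:

     ```python
     from collections import defaultdict

     # Hypothetical pre-delay per articulation, tuned by ear per patch.
     ATTACK_COMPENSATION = {"legato": 0.080, "staccato": 0.000}

     notes = [  # a sustained C major chord, then a staccato note
         {"start": 0.0, "pitch": 60, "articulation": "legato"},
         {"start": 0.0, "pitch": 64, "articulation": "legato"},
         {"start": 0.0, "pitch": 67, "articulation": "legato"},
         {"start": 1.0, "pitch": 72, "articulation": "staccato"},
     ]

     # 1) Shift slow articulations earlier so the audible attack lands on the beat.
     for n in notes:
         n["start"] -= ATTACK_COMPENSATION[n["articulation"]]

     # 2) Stagger notes that start together by a few milliseconds each,
     #    so chords don't sound quantized.
     groups = defaultdict(list)
     for n in notes:
         groups[n["start"]].append(n)
     for chord in groups.values():
         for i, n in enumerate(chord):
             n["start"] += 0.004 * i
     ```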
  12. As I mentioned earlier, the albums are listed on this page; underneath the titles are labels of the form OCRX-00YY, which tell you the album number. OCRC means commercial; OCRA means free. So the TMNT album would be listed as OCRA-0047.
  13. Are you talking about the albums? They're right here: http://ocremix.org/albums/
  14. A bit conservative in the beginning, but the solo and the bass licks help a lot. Yeah, I'd say submit it! Not the most interpretive take on the source compositionally, but I think it's above the bar thanks to the textural and dynamic variation and the added solos and bass phrases.
  15. Definitely agreed with the crits & comments; the drums were Groove Bias though - pretty old, so there's probably not much else I could do on the snare. I would definitely do it if I could. By the way, something cool I remembered: I spent about 2 hours rendering individual stems, so if you want, you could go here to find this ReMix with every single instrument isolated. Could be good for folks working on ear training! (Unfortunately it takes so long that it's the only ReMix I have on there!)
  16. @Furorezu Huh, sorry, I didn't notice this when you posted it. Whatever you did, it worked; I think you didn't add your post-mix reverb, because it doesn't sound nearly as distant, and it has a decent bass presence. I'm not on my good headphones, so I can't say much more than general advice, but it's definitely an improvement in the perceived audio fidelity.

     -----

     General advice (not necessarily directed toward your mix, but to you in general, so you know what to look for):

     One thing to watch out for is adding what you call your "post-mix reverb". If you lower your dry mix (the input signal), you get a more distant sound, since you retain more of your reverberated output (the wet signal) than your dry signal. It's like taking the instrument out of the room but keeping its "reverberated essence", which isn't exactly realistic. It also sounds to me like you're adding it on the Master track, which is not necessarily how you should do it. In a digital environment, you're free to add reverb to each instrument separately.

     Technically, even a bass guitar would have natural reverb in a room, but in a mix that adds muddiness (frequency overlap in roughly the 100~300 Hz range), since the low frequencies get reverberated as well. So, for example, if you begin with identical reverb settings for a bass guitar and a rhythm guitar, you should raise the low cut within the reverb plugin assigned to the bass, so that the wet signal is high-passed (has its low frequencies filtered out). The wet signal is just the processed (here, reverberated) output of the given input. For a bass, a low cut around 200 Hz is about right: it cuts the low-end reverberation but not the midrange and treble, so the bass still mostly keeps its natural reverberant behavior. For a rhythm guitar, a low cut around 300~400 Hz accomplishes a similar goal. This way, the kick and snare's low frequencies, which sit collectively around 40~250 Hz, should be clearer. A minimal sketch of the idea follows.
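     Here is that idea in code, using scipy; the decaying-noise impulse response is a crude stand-in for a real reverb plugin, and the point is only that the low cut goes on the wet signal, per instrument, not on the Master:

     ```python
     import numpy as np
     from scipy.signal import butter, sosfilt, fftconvolve

     sr = 44100
     t = np.arange(sr) / sr
     bass = np.sin(2 * np.pi * 55 * t)    # toy bass note (low A)

     # Crude 1-second "room": exponentially decaying noise as an impulse response.
     rng = np.random.default_rng(0)
     ir = rng.standard_normal(sr) * np.exp(-6.0 * t)
     wet = fftconvolve(bass, ir)[: len(bass)] * 0.05

     # Low-cut the *wet* signal at ~200 Hz so the reverb adds no mud, while
     # the dry bass keeps its full low end. (Try ~300-400 Hz for a rhythm guitar.)
     sos = butter(4, 200.0, btype="highpass", fs=sr, output="sos")
     wet = sosfilt(sos, wet)

     bass_out = bass + wet    # per-instrument dry + filtered wet
     ```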
  17. Huh. Oh well. Super creative. Few remixers would think of doing a battle theme in this style, or of incorporating the source into the bass and e. piano. Great work!
  18. @Nase has a good point. For example, depending on the sample, I actually sometimes like stiffly-sequenced chords when I write electric piano chords processed through a wah pedal, since that creates a spikier transient with more presence, whereas slightly offset notes soften the transients a little. Sometimes I also like to gate big chords for effect, which is another way to "sequence" stiffly.
  19. About to start grad school in a few hours! Ready, set, bye-bye, a-go-go!

  20. Looks to me like this says it: you're given a selection of sources, and you work out with your teammates who gets which one. And here, it sounds like in the boss-battle rounds you can pick from among the three sources your three-person squad has picked.
  21. No problem; they're not major things, and this actually sounds good already. It's just advice for further improvement.
  22. Oh, also, in terms of overlapping treble frequencies, it may just appear that way because the saw chords are very detuned. That's not necessarily a bad thing, and SoundCloud encodes at 128 kbps, which can degrade your upper-treble frequencies anyhow. I think it sounds sufficiently well-mixed up there. What I would examine are the drops at 1:00 and 2:13; to me, they don't quite live up to the hype you create in your buildups. I'm not sure if a cymbal is just buried in the mix, but an audible cymbal at those transition points should help improve the drops. I also think at 1:43 the lead could be more distinct in tone from the panning arpeggio. Right now it sounds like the same waveform, which blends together "too well" and decreases the clarity of the mix. What if you added a filter LFO for motion, and some distortion for harmonic strength? Other than those nitpicks, I think this is pretty close to what you were going for.
  23. I do too, but I never use it, because it's not an insta-solution, per se. No computer algorithm quite matches the "randomness" of a human's rhythm, sustained-note lengths, and playing intensities (especially if the same humanization preset is applied every time), so having a MIDI keyboard helps me get over those barriers. Yes, I do have 8+ years of piano experience, but honestly, I could be much better at improvising. I still have enough ability to play chords (for the human rhythm), solos and melodies (hoping for that happy accident!), and short passages that I can touch up later. However, I'm not one of those guys who plays things in perfectly on the first or second take, and I could probably play what I play today just as capably with a year of piano experience or less. I'm not bragging, by the way - I actually mean that in a humble and encouraging way.

     Feel free to apply the advice to your own music or just discuss it, but I think many people (including you) can benefit from applying what was said in this discussion. IMO, it's one of those skills you can learn early, and it's a nice tool to have while you work on other related things: hearing chords in your head, picturing soundscapes before you write them, imagining melodies before writing them, etc. (For contrast, a sketch of what algorithmic humanization amounts to is below.)
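     A minimal sketch of algorithmic humanization (the jitter ranges are guesses you'd tune by ear). It draws every note from the same fixed distributions, which is exactly why it feels samey next to a real performance, where the deviations follow the phrasing:

     ```python
     import random

     notes = [  # steady quarter notes (start/length in seconds, velocity 1-127)
         {"start": 0.0, "length": 0.5, "velocity": 96},
         {"start": 0.5, "length": 0.5, "velocity": 96},
         {"start": 1.0, "length": 0.5, "velocity": 96},
         {"start": 1.5, "length": 0.5, "velocity": 96},
     ]

     for n in notes:
         n["start"] += random.uniform(-0.012, 0.012)   # +/- 12 ms timing drift
         n["length"] *= random.uniform(0.92, 1.05)     # varied sustain lengths
         n["velocity"] = max(1, min(127, n["velocity"] + random.randint(-10, 8)))
     ```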